Research Article  Open Access
SVM-RFE Based Feature Selection and Taguchi Parameters Optimization for Multiclass SVM Classifier
Abstract
The support vector machine (SVM) has shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM functions well only on two-group classification problems. This study combines feature selection by SVM recursive feature elimination (SVM-RFE) with parameter optimization to investigate the classification accuracy of multiclass problems on the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 instances. The feature variables in the two datasets were sorted in descending order of explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier to optimize the parameters C and γ and to increase classification accuracy for multiclass classification. The experimental results show that classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases.
1. Introduction
The support vector machine (SVM) is one of the important tools of machine learning. The principle of SVM operation is as follows: a given group of classified data is trained by the algorithm to obtain a classification model, which can help predict the category of new data [1, 2]. SVM is widely applied in various fields, such as disease or medical imaging diagnosis [3–5], financial crisis prediction [6], biomedical engineering, and bioinformatics classification [7, 8]. Although SVM is an efficient machine learning method, its classification accuracy requires further improvement for multidimensional classification problems and datasets with interacting feature variables [9]. For such problems, feature selection can generally be applied to reduce the complexity of the data structure and identify important feature variables as a new set of instances [10]. Through feature selection, inappropriate, redundant, and noisy data can be filtered out to reduce the computational time of classification and improve classification accuracy. Common methods of feature selection include backward feature selection (BFS), forward feature selection (FFS), and ranker [11]. Another feature selection method, support vector machine recursive feature elimination (SVM-RFE), can filter relevant features and remove relatively insignificant feature variables in order to achieve higher classification performance [12]. The research findings of Harikrishna et al. have shown that computation is simpler and classification accuracy can be improved more effectively when datasets are first reduced by SVM-RFE selection [13–15].
As SVM is basically designed for two-class data [16], many scholars have explored its extension to multiclass data [17–19]; however, the resulting classification accuracy is often not ideal. There are also many studies on choosing kernel parameters for SVM [20–22]. Therefore, this study applies SVM-RFE to sort the 33 variables of the Dermatology dataset and the 16 variables of the Zoo dataset by explanatory power in descending order and selects different feature sets, before using the Taguchi parameter design to optimize the multiclass SVM parameters and improve the classification accuracy of the multiclass SVM classifier.
This study is organized as follows. Section 2 describes the research data; Section 3 introduces the methods used throughout this paper; Section 4 discusses the experiments and results. Finally, Section 5 presents our conclusions.
2. Study Population
This study used the Dermatology and Zoo datasets from the University of California, Irvine (UCI) Machine Learning Repository to conduct experimental tests, parameter optimization, and classification accuracy evaluation using the SVM classifier.
In medicine, dermatological diseases are diseases of the skin that can have a serious impact on health. They occur frequently, and there are more than 1000 kinds, such as psoriasis, seborrheic dermatitis, lichen planus, pityriasis, chronic dermatitis, and pityriasis rubra pilaris. The Dermatology dataset was established by Nilsel in 1998 and contains 33 feature variables and 1 class variable (6 classes).
The dermatology feature variables and data summary are shown in Table 1. The Dermatology dataset has eight instances with missing values; after removing them, 358 instances were retained for this study. The instances per category are psoriasis (Class 1): 111 instances, seborrheic dermatitis (Class 2): 71 instances, lichen planus (Class 3): 60 instances, pityriasis (Class 4): 48 instances, chronic dermatitis (Class 5): 48 instances, and pityriasis rubra pilaris (Class 6): 20 instances. The Zoo dataset contains 17 Boolean-valued attributes and 101 instances. The instances per category are as follows: bear and so forth (Class 1): 41 instances; chicken and so forth (Class 2): 20 instances; sea snake and so forth (Class 3): 5 instances; bass and so forth (Class 4): 13 instances; (Class 5): 4 instances; frog and so forth (Class 6): 8 instances; and honeybee and so forth (Class 7): 10 instances.

Before feature selection, we conducted feature attribute coding. The feature attribute coding of the Dermatology and Zoo databases is shown in Tables 2 and 3.


3. Methodology
3.1. Research Framework
The research framework of the study is shown in Figure 1. The steps are as follows.
(1) Database preprocessing: delete the instances with missing values and code the feature variables of the Dermatology and Zoo datasets, leaving 358 and 101 instances, respectively, for further experiments.
(2) Feature selection: apply SVM-RFE to rank the features by importance and determine the feature sets that contribute to classification.
(3) Parameter optimization: apply the Taguchi parameter design to optimize the parameters (C and γ) of the multiclass SVM classifier in order to enhance classification accuracy on the multiclass datasets.
3.2. Feature Selection
Feature selection implies not only cardinality reduction, which means imposing an arbitrary or predefined cutoff on the number of attributes that can be considered when building a model, but also the choice of attributes, meaning that either the analyst or the modeling tool actively selects or discards attributes based on their usefulness for analysis. A feature selection method is a search strategy that selects or removes features of the original feature set to generate candidate subsets and obtain the optimum feature subset. The subsets selected at each step are compared and analyzed according to a formulated assessment function: if the subset selected at step t+1 is better than the subset selected at step t, the subset of step t+1 is retained as the current optimum subset.
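The search strategy above can be sketched as a greedy backward elimination loop. This is an illustrative sketch only: the dataset (iris) and the assessment function (mean cross-validated SVM accuracy) are our own choices, not the paper's.

```python
# Sketch of a wrapper-style backward-elimination search (illustrative).
# At each step, the candidate subset that scores best on the assessment
# function replaces the previous one if it is at least as good.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

def assess(features):
    """Assessment function: mean CV accuracy on the candidate subset."""
    return cross_val_score(SVC(), X[:, features], y, cv=5).mean()

features = list(range(X.shape[1]))
best_subset, best_score = features[:], assess(features)
while len(features) > 1:
    # Try removing each remaining feature; keep the best resulting subset.
    trials = [(assess([f for f in features if f != r]), r) for r in features]
    score, removed = max(trials)
    features.remove(removed)
    if score >= best_score:   # subset at step t+1 is at least as good as step t
        best_subset, best_score = features[:], score
print(best_subset, round(best_score, 3))
```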
3.3. Linear Support Vector Machine (Linear SVM)
SVM is developed from statistical learning theory, based on structural risk minimization (SRM). It can be applied to classification and nonlinear regression [6]. Generally speaking, SVM can be divided into linear SVM and nonlinear SVM, described as follows.
(1) Linear SVM. The linear SVM encodes the training data of the two types with Class 1 labeled "+1" and Class 2 labeled "−1", that is, $y_i \in \{+1, -1\}$; the hyperplane is represented as

$$\mathbf{w} \cdot \mathbf{x} + b = 0, \tag{1}$$

where $\mathbf{w}$ denotes the weight vector, $\mathbf{x}$ denotes the input data, and $b$ denotes a constant bias (displacement) of the hyperplane. The purpose of the bias is to ensure that the hyperplane is in the correct position after horizontal movement; therefore, the bias is determined after training $\mathbf{w}$. The parameters of the hyperplane are $\mathbf{w}$ and $b$. When SVM is applied to classification, the hyperplane is regarded as a decision function:

$$f(\mathbf{x}) = \operatorname{sign}(\mathbf{w} \cdot \mathbf{x} + b). \tag{2}$$

Generally speaking, the purpose of SVM is to obtain the hyperplane with the maximum marginal distance between the two categories of the dataset. Optimizing the hyperplane can be regarded as a quadratic programming problem:

$$\min_{\mathbf{w},\, b}\; \frac{1}{2}\|\mathbf{w}\|^2 \quad \text{subject to } y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1,\; i = 1, \ldots, l. \tag{3}$$

The original minimization problem is converted into a maximization problem using Lagrange theory:

$$\max_{\boldsymbol{\alpha}}\; \sum_{i=1}^{l} \alpha_i - \frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l} \alpha_i \alpha_j y_i y_j\, \mathbf{x}_i \cdot \mathbf{x}_j \quad \text{subject to } \alpha_i \ge 0,\; \sum_{i=1}^{l} \alpha_i y_i = 0. \tag{4}$$

Finally, the linearly separable decision function is

$$f(\mathbf{x}) = \operatorname{sign}\!\left(\sum_{i=1}^{l} \alpha_i y_i\, \mathbf{x}_i \cdot \mathbf{x} + b\right). \tag{5}$$

If $f(\mathbf{x}) = +1$, the sample is in the same category as the samples marked "+1"; otherwise, it is in the category of the samples marked "−1." When the training data include noise, the linear hyperplane cannot accurately distinguish all data points. By introducing slack variables $\xi_i$ into the constraint, the original (3) can be modified into the following:

$$\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}}\; \frac{1}{2}\|\mathbf{w}\|^2 + C\sum_{i=1}^{l} \xi_i \quad \text{subject to } y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1 - \xi_i,\; \xi_i \ge 0, \tag{6}$$

where $\xi_i$ is the distance between the boundary and the misclassified point, and the penalty parameter $C$ represents the cost of classification errors on the training data during learning, as determined by the user. When $C$ is greater, the margin will be smaller, indicating a smaller fault tolerance when a fault occurs; when $C$ is smaller, the fault tolerance will be greater. When $\xi_i = 0$ for all $i$, the linearly inseparable problem degenerates into a linearly separable problem.
In this case, the various parameters and the optimum solution of the target function can be obtained via the Lagrangian coefficients; the linearly inseparable dual optimization problem is as follows:

$$\max_{\boldsymbol{\alpha}}\; \sum_{i=1}^{l} \alpha_i - \frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l} \alpha_i \alpha_j y_i y_j\, \mathbf{x}_i \cdot \mathbf{x}_j \quad \text{subject to } 0 \le \alpha_i \le C,\; \sum_{i=1}^{l} \alpha_i y_i = 0. \tag{7}$$

Finally, the linear decision-making function is

$$f(\mathbf{x}) = \operatorname{sign}\!\left(\sum_{i=1}^{l} \alpha_i y_i\, \mathbf{x}_i \cdot \mathbf{x} + b\right). \tag{8}$$
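The hyperplane $\mathbf{w} \cdot \mathbf{x} + b$ and the sign decision function above can be illustrated on a small two-class toy problem; the data here are our own, and the intent is only to show that the learned $\mathbf{w}$ and $b$ reproduce the classifier's predictions.

```python
# Minimal linear SVM showing the hyperplane parameters w, b and the
# decision function f(x) = sign(w·x + b) from the equations above.
import numpy as np
from sklearn.svm import SVC

X = np.array([[-2.0, -1.0], [-1.5, -0.5], [1.0, 1.5], [2.0, 1.0]])
y = np.array([-1, -1, 1, 1])        # labels coded +1 / -1 as in the text

clf = SVC(kernel="linear", C=1.0).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# Applying sign(w·x + b) by hand matches the classifier's own predictions.
f = np.sign(X @ w + b)
print(f, clf.predict(X))
```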
(2) Nonlinear Support Vector Machine (Nonlinear SVM). When the input training samples cannot be separated by a linear SVM, a mapping function $\Phi$ can convert the original data into a new high-dimensional feature space in which the problem becomes linearly separable. SVM can efficiently perform nonlinear classification using what is called the kernel trick, implicitly mapping its inputs into high-dimensional feature spaces. Many kernel functions have been proposed; using a kernel suited to the data features can effectively improve the computational efficiency of SVM. The four most common kernel functions are:
(1) linear kernel: $K(\mathbf{x}_i, \mathbf{x}_j) = \mathbf{x}_i \cdot \mathbf{x}_j$;
(2) polynomial kernel: $K(\mathbf{x}_i, \mathbf{x}_j) = (\gamma\, \mathbf{x}_i \cdot \mathbf{x}_j + r)^d$;
(3) radial basis kernel: $K(\mathbf{x}_i, \mathbf{x}_j) = \exp(-\gamma \|\mathbf{x}_i - \mathbf{x}_j\|^2)$;
(4) sigmoid kernel: $K(\mathbf{x}_i, \mathbf{x}_j) = \tanh(\gamma\, \mathbf{x}_i \cdot \mathbf{x}_j + r)$.
The radial basis kernel is the most frequently applied in high-dimensional and nonlinear problems, and the only parameters to be set are $C$ and $\gamma$, which reduces SVM complexity and improves computational efficiency; therefore, this study selects the radial basis kernel.
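The advantage of the radial basis kernel over the linear kernel on nonlinearly separable data can be seen on a small synthetic problem. The dataset (concentric circles) and the particular C and gamma values below are our own illustrative choices, not the paper's settings.

```python
# RBF (radial basis) kernel SVM; its two tunable parameters are C and gamma,
# the same C and γ optimized later in the paper.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear", C=1.0).fit(X, y)
rbf = SVC(kernel="rbf", C=10.0, gamma=2.0).fit(X, y)  # K(x,z)=exp(-γ||x-z||²)

# Concentric circles are not linearly separable, so the RBF kernel wins.
print(linear.score(X, y), rbf.score(X, y))
```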
3.4. Support Vector Machine Recursive Feature Elimination (SVMRFE)
A feature selection process can be used to remove terms in the training dataset that are statistically uncorrelated with the class labels, improving both efficiency and accuracy. Pal and Maiti (2010) provided a supervised dimensionality reduction method in which the feature selection problem is modeled as a mixed 0-1 integer program [23]. The multiclass Mahalanobis-Taguchi system (MMTS) was developed for simultaneous multiclass classification and feature selection; the important features are identified using orthogonal arrays and the signal-to-noise ratio and are then used to construct a reduced model measurement scale [24]. SVM-RFE is an SVM-based feature selection algorithm proposed by Guyon et al. [12], who used it to select key and important feature sets. In addition to reducing classification computational time, it can improve the classification accuracy rate [12]. In recent years, many scholars have improved classification performance in medical diagnosis by taking advantage of this method [22, 25].
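scikit-learn's `RFE` wrapper around a linear SVM gives a compact, off-the-shelf analogue of the Guyon et al. procedure described above. This is a sketch on a toy binary dataset (breast cancer), not on the paper's Dermatology or Zoo data; the target subset size of 10 is arbitrary.

```python
# SVM-RFE via scikit-learn's RFE: recursively train a linear SVM and
# eliminate the lowest-weight feature until 10 features remain.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

rfe = RFE(estimator=SVC(kernel="linear"), n_features_to_select=10, step=1)
rfe.fit(X, y)
print(rfe.ranking_)   # rank 1 = kept; larger ranks were eliminated earlier
```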
3.5. Multiclass SVM Classifier
SVM's basic classification principle is binary classification. Presently, there are three main methods for processing multiclass problems: one-against-all, one-against-one, and the directed acyclic graph [26], described as follows.
(1) One-Against-All (OAA). Proposed by Bottou et al. (1994), the one-versus-rest approach converts a k-category classification problem into k binary problems [27]. Scholars have also proposed subsequently effective classification methods [28]. In the training process, k binary SVMs must be trained; when training the ith classifier, data in the ith category are labeled "+1" and the data of the remaining categories "−1" to complete the training of that binary SVM. During testing, each instance is tested by all k trained binary SVMs, and the classification result is determined by comparing their outputs: for an unknown instance, the k decision functions generate k decision values, and the category with the maximum decision value is assigned.
(2) One-Against-One (OAO). When there are k categories, every pair of categories produces an SVM; thus k(k−1)/2 classifiers are produced, and the category of a sample is determined by a voting strategy [28]. For example, if there are three categories (1, 2, and 3) and a sample to be classified with an assumed category of 2, the sample is input into the three SVMs. Each SVM determines the category of the sample using its decision function and adds one vote to that category. Finally, the category with the most votes is assigned to the sample.
(3) Directed Acyclic Graph (DAG). Similar to the OAO method, DAG decomposes a multiclass problem into binary classification problems [18]. During training, it selects any two of the k categories as a group and combines them into a binary SVM; during testing, it builds a binary acyclic graph, and data of an unknown category are tested from the root node. In a problem with k classes, a rooted binary DAG has k leaves labeled by the classes, and each of the internal nodes is labeled with an element of a Boolean function [19].
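The counts above (k classifiers for OAA, k(k−1)/2 for OAO and DAG) can be checked directly with scikit-learn's explicit multiclass wrappers; the iris data (k = 3) are used purely for illustration.

```python
# OAA trains k binary SVMs; OAO trains k(k-1)/2. For k = 3 both give 3.
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)   # k = 3 classes
k = 3

oaa = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)
oao = OneVsOneClassifier(SVC(kernel="rbf")).fit(X, y)

print(len(oaa.estimators_), len(oao.estimators_))  # k and k*(k-1)//2
```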
4. Experiment and Results
4.1. Feature Selection Based on SVMRFE
The main purpose of SVM-RFE is to compute the ranking weights for all features and sort the features by weight as the classification basis. SVM-RFE is an iterative process of backward feature removal. Its steps for feature set selection are as follows.
(1) Use the current dataset to train the classifier.
(2) Compute the ranking weights for all features.
(3) Delete the feature with the smallest weight.
The iteration process is implemented until only one feature remains in the dataset; the result is a list of features ordered by weight. The algorithm removes the feature with the smallest ranking weight while retaining the feature variables of significant impact, so that the feature variables are finally listed in descending order of explanatory power. SVM-RFE's selection of feature sets can be divided into three main steps: input of the dataset to be classified, calculation of the weight of each feature, and deletion of the feature with minimum weight to obtain the feature ranking. The computational steps are as follows [12].
(1) Input. Training samples $X_0 = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_l]^T$; class labels $\mathbf{y} = [y_1, y_2, \ldots, y_l]^T$; current feature set $s = [1, 2, \ldots, n]$; feature sorted list $r = [\,]$.

(2) Feature Sorting. Repeat the following process until $s = [\,]$. Obtain the new training sample matrix restricted to the remaining features: $X = X_0(:, s)$. Train the classifier: $\boldsymbol{\alpha} = \text{SVM-train}(X, \mathbf{y})$. Calculate the weight vector: $\mathbf{w} = \sum_k \alpha_k y_k \mathbf{x}_k$. Calculate the sorting criterion: $c_i = w_i^2$ for all $i$. Find the feature with the minimum weight: $f = \arg\min_i c_i$. Update the feature sorted list: $r = [s(f), r]$. Remove the feature with minimum weight: $s = s(1 : f-1,\; f+1 : \text{length}(s))$.

(3) Output. Feature sorted list $r$. In each loop, the feature with minimum $c_i$ is removed. The SVM then retrains on the remaining features to obtain the new feature sorting. SVM-RFE repeats this process until a complete feature sorted list is obtained. By training SVMs on the feature subsets of the sorted list and evaluating them using SVM prediction accuracy, we can obtain the optimum feature subset.
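The loop above can be transcribed almost line for line. This sketch uses iris data rather than the paper's databases, and for the multiclass case it sums the squared weights across the classifier's pairwise weight rows, one reasonable extension of the binary criterion $c_i = w_i^2$.

```python
# Direct sketch of the SVM-RFE loop: train a linear SVM on the surviving
# features s, rank by squared weight, prepend the weakest feature to r.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X0, y = load_iris(return_X_y=True)
s = list(range(X0.shape[1]))   # current (surviving) feature set
r = []                         # feature sorted list, most important first

while s:
    X = X0[:, s]                                   # X = X0(:, s)
    clf = SVC(kernel="linear").fit(X, y)           # alpha = SVM-train(X, y)
    w = clf.coef_                                  # one weight row per class pair
    c = (w ** 2).sum(axis=0)                       # sorting criterion c_i = w_i^2
    f = int(np.argmin(c))                          # feature with minimum weight
    r.insert(0, s[f])                              # r = [s(f), r]
    s.pop(f)                                       # remove it from s

print(r)   # all four iris features, ranked most important first
```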
4.2. SVM Parameters Optimization Based on Taguchi Method
The Taguchi method arises from an engineering perspective, and its major tools are the orthogonal array and the S/N (signal-to-noise) ratio; the S/N ratio is closely related to the loss function, and a higher S/N ratio indicates fewer losses [29]. Parameter selection is an important step in constructing an SVM classification model, as differences in parameter settings affect model stability and accuracy. Hsu and Yu (2012) combined the Taguchi method and the Staelin method to optimize an SVM-based e-mail spam filtering model and improve filtering accuracy [30]. The Taguchi parameter design has many advantages. For one, its effect on robustness is great: robustness reduces variation by reducing the effects of uncontrollable variation, and more consistent parts mean better quality. The Taguchi method also allows many different parameters to be analyzed without a prohibitively large amount of experimentation, providing the design engineer with a systematic and efficient method for determining near-optimum design parameters for performance and cost. Therefore, using the Taguchi quality parameter design, this study optimizes the parameters C and γ to enhance the accuracy of the SVM classifier in the diagnosis of multiclass diseases.
This study uses the multiclass classification accuracy as the quality attribute of the Taguchi parameter design [21]. In general, higher classification accuracy means a better classification model; that is, the quality attribute is larger-the-better (LTB), and the S/N ratio is defined as

$$S/N = -10 \log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_i^2}\right), \tag{13}$$

where $n$ is the number of repetitions and $y_i$ is the observed accuracy of the $i$th repetition.
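The LTB definition in (13) is straightforward to compute; the accuracy values below are invented for illustration (higher and more consistent accuracies give a larger, i.e. better, S/N ratio).

```python
# Larger-the-better S/N ratio: S/N = -10 * log10((1/n) * sum(1/y_i^2)).
import math

def sn_larger_the_better(ys):
    n = len(ys)
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / n)

print(sn_larger_the_better([0.95, 0.96, 0.94, 0.95, 0.97]))  # high, consistent
print(sn_larger_the_better([0.80, 0.95, 0.70, 0.90, 0.85]))  # lower, noisier
```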
4.3. Evaluation of Classification Accuracy
Cross-validation divides all the samples into a training set and a testing set. The training set is the learning data from which the algorithm establishes the classification rules; the samples of the testing set are used to measure the performance of those rules. All the samples are randomly divided into k mutually exclusive folds, stratified by category. Each fold in turn is used as the testing data while the remaining folds are used as the training set; this step is repeated k times, and each testing set validates the classification rules learned from the corresponding training set to obtain an accuracy rate. The average of the accuracy rates over all testing sets is used as the final evaluation result. This method is known as k-fold cross-validation.
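The procedure just described maps directly onto scikit-learn's stratified k-fold utilities; this sketch uses k = 10 on toy data as an illustration, not the paper's evaluation.

```python
# k-fold cross-validation: stratified folds, each used once as the test set,
# with the final score being the mean accuracy over the k folds.
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)  # folds by class
scores = cross_val_score(SVC(), X, y, cv=cv)
print(scores.mean())   # final evaluation = average accuracy over the 10 folds
```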
4.4. Results and Discussion
The ranking order of all features for the Dermatology and Zoo databases, obtained by SVM-RFE, is summarized as follows: Dermatology = {V1, V16, V32, V28, V19, V3, V17, V2, V15, V21, V26, V13, V14, V5, V18, V4, V23, V11, V8, V12, V27, V24, V6, V25, V30, V29, V10, V31, V22, V20, V33, V7, V9} and Zoo = {V13, V9, V14, V10, V16, V4, V8, V1, V11, V2, V12, V5, V6, V3, V15, V7}. According to the suggestions of scholars, the classification error rate of OAO is relatively lower when the number of instances is below 1000. Multiclass SVM parameter settings affect the multiclass SVM's classification accuracy. Arenas-García and Pérez-Cruz studied SVM parameter settings on the multiclass Zoo dataset [31]. They carried out simulations, using Gaussian kernels, for all possible combinations of C and gamma, with gamma = sqrt(0.25d), sqrt(0.5d), sqrt(d), sqrt(2d), and sqrt(4d), where d is the dimension of the input data. In this study, we executed wide ranges of parameter settings for the Dermatology and Zoo databases; the resulting suggested parameter settings for each database and the testing accuracies are shown in Table 4.

As shown in Table 4, for both parameters C and γ, some experimental combinations yield clearly higher accuracy than others, and the near-optimal value of C or γ may not be the same for different databases. Finding appropriate parameter settings is important for classifier performance, yet it is practically impossible to simulate every possible combination of parameter settings; this is why the Taguchi methodology is applied to reduce the number of experimental combinations for SVM. The experimental procedure in this study first followed the related study [31] and set a possible range for both databases (C: 1~100, γ: 1~12). After that, we slightly adjusted the ranges to see whether better results could be obtained in the Taguchi parameter optimization for each database. According to our experimental results, the final ranges of C and γ are 10~100 and 2.4~10, respectively, for the Dermatology database, and 5~50 and 0.08~11, respectively, for the Zoo database. Within these ranges of the Dermatology and Zoo parameters C and γ, we select three parameter levels and two control factors, A and B, to represent C and γ, respectively. The Taguchi experiment uses the L9 orthogonal array, and the factor level configuration is illustrated in Table 5.
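With two control factors at three levels each, the nine experimental combinations can be enumerated and scored by the LTB S/N ratio of (13). This is a sketch only: the dataset (wine), the cross-validation scheme, and the candidate levels for C and gamma below are our illustrative choices, not the paper's ranges or tables.

```python
# Taguchi-style 9-run experiment: two factors (A = C, B = gamma), three
# levels each; score each run by the larger-the-better S/N ratio of the
# repeated accuracies and keep the best combination.
import math
from itertools import product
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)

A = [1.0, 10.0, 100.0]   # illustrative levels for C
B = [0.01, 0.1, 1.0]     # illustrative levels for gamma

def sn(ys):  # larger-the-better S/N ratio
    return -10.0 * math.log10(sum(1.0 / v**2 for v in ys) / len(ys))

runs = {}
for C_, g in product(A, B):                    # 9 experimental combinations
    accs = cross_val_score(SVC(C=C_, gamma=g), X, y, cv=5)
    runs[(C_, g)] = sn(accs)

best = max(runs, key=runs.get)
print(best, round(runs[best], 3))
```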

After data preprocessing, the Dermatology and Zoo databases include 358 and 101 instances, respectively. Each experiment of the orthogonal array is repeated five times (n = 5); the experimental combinations and observations are summarized in Tables 6 and 7. According to (13), the S/N ratio of Taguchi experimental combination #1 can be calculated, and the S/N ratios of the remaining eight experimental combinations are summarized in Table 6. The Zoo experimental results and S/N ratio calculations are shown in Table 7. From these results, we then calculate the average S/N ratio of each factor level; taking the experiment of Table 8 as an example, the average S/N ratio of factor A at Level 1 is obtained by averaging the S/N ratios of the combinations in which A is at Level 1.
 

Similarly, we can calculate the average effects of factors A and B from Table 6. The difference analysis results for the factor levels of the Dermatology and Zoo databases are shown in Table 8, and the factor effect diagrams in Figures 2 and 3. As a greater S/N ratio represents better quality, the optimal Dermatology parameter level combination, with its corresponding settings of C and γ, and likewise the optimal Zoo parameter level combination and settings, are determined from the factor level differences and factor effect diagrams.
When constructing the multiclass SVM model with SVM-RFE, different feature sets are selected according to their significance. At the first stage, Taguchi quality engineering is applied to select the optimum values of parameters C and γ; at the second stage, the multiclass SVM classifier is constructed and its classification performance compared under the above parameters. In the Dermatology experiment, Table 9 illustrates the two feature subsets containing 23 and 33 feature variables; the 33-feature sets are tested both by SVM and by the Taguchi-based SVM, with the parameter settings and testing accuracy results shown in Table 9. The experimental results in Figure 4 show that the Taguchi-based SVM testing accuracy on the 17-feature dataset can exceed 90%, better than the accuracy on the 20-feature dataset; moreover, regardless of how many feature variables are selected, the accuracy of the SVM without Taguchi-optimized parameters cannot exceed 90%.

Regarding the Zoo experiment, Table 10 summarizes the experimental test results of sets containing 6, 12, and 16 feature variables using SVM and the Taguchi-based SVM. As shown in Table 10, the classification accuracy of the 12-feature set using SVM-RFE-Taguchi is the highest, up to 97% (±0.0396). As shown in Figure 5, the classification accuracy of the dataset containing 7 feature variables by SVM-RFE-Taguchi can exceed 90%, which yields relatively better prediction results.

5. Conclusions
As the study of the impact of feature selection on multiclass classification accuracy becomes increasingly attractive and significant, this study applies SVM-RFE and SVM to construct a multiclass classification model. As RFE is a wrapper-type feature selection method, it requires a predefined classifier as the assessment rule for feature selection; therefore, SVM is used as the RFE assessment standard to guide the selection of feature sets.
According to the experimental results of this study, the impact of parameter selection on the construction of the SVM classification model is substantial. Therefore, this study applies the Taguchi parameter design to determine the parameter ranges and select the optimum parameter combination for the SVM classifier, as this is a key factor influencing classification accuracy. This study also collected the experimental results of different research methods on the Dermatology and Zoo databases [16, 32, 33], as shown in Table 11. By comparison, the proposed method achieves higher classification accuracy.

Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
 N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, Cambridge, UK, 2000.
 J. Luts, F. Ojeda, R. Van de Plas, B. De Moor, S. Van Huffel, and J. A. K. Suykens, "A tutorial on support vector machine-based methods for classification problems in chemometrics," Analytica Chimica Acta, vol. 665, no. 2, pp. 129–145, 2010.
 M. F. Akay, "Support vector machines combined with feature selection for breast cancer diagnosis," Expert Systems with Applications, vol. 36, no. 2, pp. 3240–3247, 2009.
 C.-Y. Chang, S.-J. Chen, and M.-F. Tsai, "Application of support-vector-machine-based method for feature selection and classification of thyroid nodules in ultrasound images," Pattern Recognition, vol. 43, no. 10, pp. 3494–3506, 2010.
 H.-L. Chen, B. Yang, J. Liu, and D.-Y. Liu, "A support vector machine classifier with rough set-based feature selection for breast cancer diagnosis," Expert Systems with Applications, vol. 38, no. 7, pp. 9014–9022, 2011.
 P. Danenas and G. Garsva, "Credit risk evaluation modeling using evolutionary linear SVM classifiers and sliding window approach," Procedia Computer Science, vol. 9, pp. 1324–1333, 2012.
 C. L. Huang, H. C. Liao, and M. C. Chen, "Prediction model building and feature selection with support vector machines in breast cancer diagnosis," Expert Systems with Applications, vol. 34, no. 1, pp. 578–587, 2008.
 H. F. Liau and D. Isa, "Feature selection for support vector machine-based face-iris multimodal biometric system," Expert Systems with Applications, vol. 38, no. 9, pp. 11105–11111, 2011.
 Y. Zhang, Z. Chi, and Y. Sun, "A novel multiclass support vector machine based on fuzzy theories," in Intelligent Computing: International Conference on Intelligent Computing, Part I (ICIC '06), D. S. Huang, K. Li, and G. W. Irwin, Eds., vol. 4113 of Lecture Notes in Computer Science, pp. 42–50, Springer, Berlin, Germany.
 Y. Aksu, D. J. Miller, G. Kesidis, and Q. X. Yang, "Margin-maximizing feature elimination methods for linear and nonlinear kernel-based discriminant functions," IEEE Transactions on Neural Networks, vol. 21, no. 5, pp. 701–717, 2010.
 P. Pudil, J. Novovičová, and J. Kittler, "Floating search methods in feature selection," Pattern Recognition Letters, vol. 15, no. 11, pp. 1119–1125, 1994.
 I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, "Gene selection for cancer classification using support vector machines," Machine Learning, vol. 46, no. 1–3, pp. 389–422, 2002.
 S. Harikrishna, M. A. H. Farquad, and Shabana, "Credit scoring using support vector machine: a comparative analysis," in Advanced Materials Research, Trans Tech Publications, Zürich, Switzerland, 2012.
 X. Lin, F. Yang, L. Zhou et al., "A support vector machine-recursive feature elimination feature selection method based on artificial contrast variables and mutual information," Journal of Chromatography B: Analytical Technologies in the Biomedical and Life Sciences, vol. 910, pp. 149–155, 2012.
 R. Zhang and M. Jianwen, "Feature selection for hyperspectral data based on recursive support vector machines," International Journal of Remote Sensing, vol. 30, no. 14, pp. 3669–3677, 2009.
 Z. X. Xie, Q. H. Hu, and D. R. Yu, "Fuzzy output support vector machines for classification," in Advances in Natural Computation, L. Wang, K. Chen, and Y. S. Ong, Eds., vol. 3612, pp. 1190–1197, Springer, Berlin, Germany.
 Y. Liu, Z. You, and L. Cao, "A novel and quick SVM-based multi-class classifier," Pattern Recognition, vol. 39, no. 11, pp. 2258–2264, 2006.
 J. Platt, N. Cristianini, and J. Shawe-Taylor, "Large margin DAGs for multiclass classification," in Advances in Neural Information Processing Systems, S. A. Solla, T. K. Leen, and K. R. Muller, Eds., vol. 12, pp. 547–553, 2000.
 Y. Xu, S. Zomer, and R. G. Brereton, "Support vector machines: a recent method for classification in chemometrics," Critical Reviews in Analytical Chemistry, vol. 36, no. 3-4, pp. 177–188, 2006.
 M. L. Huang, Y. H. Hung, and E. J. Lin, "Effects of SVM parameter optimization based on the parameter design of Taguchi method," International Journal on Artificial Intelligence Tools, vol. 20, no. 3, pp. 563–575, 2011.
 H.-C. Lin, C.-T. Su, C.-C. Wang, B.-H. Chang, and R.-C. Juang, "Parameter optimization of continuous sputtering process based on Taguchi methods, neural networks, desirability function, and genetic algorithms," Expert Systems with Applications, vol. 39, no. 17, pp. 12918–12925, 2012.
 Y. Mao, D. Pi, Y. Liu, and Y. Sun, "Accelerated recursive feature elimination based on support vector machine for key variable identification," Chinese Journal of Chemical Engineering, vol. 14, no. 1, pp. 65–72, 2006.
 A. Pal and J. Maiti, "Development of a hybrid methodology for dimensionality reduction in Mahalanobis-Taguchi system using Mahalanobis distance and binary particle swarm optimization," Expert Systems with Applications, vol. 37, no. 2, pp. 1286–1293, 2010.
 C.-T. Su and Y.-H. Hsiao, "Multiclass MTS for simultaneous feature selection and classification," IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 2, pp. 192–205, 2009.
 X. Lin, F. Yang, L. Zhou et al., "A support vector machine-recursive feature elimination feature selection method based on artificial contrast variables and mutual information," Journal of Chromatography B, vol. 910, pp. 149–155, 2012.
 E. Hüllermeier and S. Vanderlooy, "Combining predictions in pairwise classification: an optimal adaptive voting strategy and its relation to weighted voting," Pattern Recognition, vol. 43, no. 1, pp. 128–142, 2010.
 L. Bottou, C. Cortes, J. Denker et al., "Comparison of classifier methods: a case study in handwritten digit recognition," in Proceedings of the 12th IAPR International Conference on Pattern Recognition, vol. 2, pp. 77–82, IEEE Computer Society Press, Los Alamitos, Calif, USA, 1994.
 J. Fürnkranz, "Round robin rule learning," in Proceedings of the 18th International Conference on Machine Learning (ICML '01), pp. 146–153, 2001.
 M. R. Sohrabi, S. Jamshidi, and A. Esmaeilifar, "Cloud point extraction for determination of Diazinon: optimization of the effective parameters using Taguchi method," Chemometrics and Intelligent Laboratory Systems, vol. 110, no. 1, pp. 49–54, 2012.
 W. C. Hsu and T. Y. Yu, "Support vector machines parameter selection based on combined Taguchi method and Staelin method for e-mail spam filtering," International Journal of Engineering and Technology Innovation, vol. 2, no. 2, pp. 113–125, 2012.
 J. Arenas-García and F. Pérez-Cruz, "Multi-class support vector machines: a new approach," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), vol. 2, pp. 781–784, April 2003.
 K. G. Srinivasa, K. R. Venugopal, and L. M. Patnaik, "Feature extraction using fuzzy c-means clustering for data mining systems," International Journal of Computer Science and Network Security, vol. 6, no. 3A, pp. 230–236, 2006.
 Y. Ren, H. Liu, C. Xue, X. Yao, M. Liu, and B. Fan, "Classification study of skin sensitizers based on support vector machine and linear discriminant analysis," Analytica Chimica Acta, vol. 572, no. 2, pp. 272–282, 2006.
 Z. He, Farthest-point heuristic based initialization methods for K-modes clustering [thesis], Department of Computer Science and Engineering, Harbin Institute of Technology, Harbin, China, 2006.
 S. Golzari, S. Doraisamy, M. N. Sulaiman, and N. I. Udzir, "Effect of fuzzy resource allocation method on AIRS classifier accuracy," Journal of Theoretical and Applied Information Technology, vol. 5, no. 1, pp. 18–24, 2009.
Copyright
Copyright © 2014 Mei-Ling Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.