Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 398232, 17 pages
http://dx.doi.org/10.1155/2012/398232
Research Article

An Optimal Classification Method for Biological and Medical Data

1Institute of Information Management, National Chiao Tung University, Management Building 2, 1001 Ta-Hsueh Road, Hsinchu 300, Taiwan
2Department of Information Management, Chung Hua University, No. 707, Section 2, WuFu Road, Hsinchu 300, Taiwan
3Department of Information Management, College of Management, Fu Jen Catholic University, No. 510, Jhongjheng Road, Sinjhuang, Taipei 242, Taiwan

Received 25 October 2011; Revised 25 January 2012; Accepted 28 January 2012

Academic Editor: Jung-Fa Tsai

Copyright © 2012 Yao-Huei Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper proposes a method that classifies biological and medical datasets by a union of hyperspheres, formulated as a mixed-integer nonlinear program. The nonlinear terms of the classification program are treated by a piecewise linearization technique to obtain a global optimum. Numerical examples illustrate that the proposed method obtains the global optimum more effectively than current methods.

1. Introduction

Classification techniques have been widely applied in the biological and medical research domains [1–5]. Both object classification and pattern recognition for biological and medical datasets demand optimal accuracy, since patients' lives may be at stake. However, cancer identification with supervised learning techniques does not take a global view in identifying species or predicting survival. The improvement should cover the whole scope of the data rather than considering only diagnostic efficiency. This research aims to extract features from whole datasets in the form of induction rules.

In a given dataset with several objects, in which each object has some attributes and belongs to a specific class, classification techniques are used to find a rule of attributes that appropriately describes the features of a specified class. These techniques have been studied over the last four decades, including decision tree-based methods [6–11], hyperplane-based methods [12–14], and machine learning-based methods [14–17].

To assess the effects of these classifying techniques, three criteria are used for evaluating the quality of induced rules, based on the study of Li and Chen [3].
(i) Accuracy. A rule fitting a class should not cover the objects of other classes; the higher the accuracy of a rule, the better.
(ii) Support. A good rule fitting a class should be supported by most of the objects of that class.
(iii) Compactness. A good rule should be expressed in a compact way; that is, the fewer the rules, the better.

This study proposes a novel method to induce rules with high rates of accuracy, support, and compactness based on global optimization techniques, which have become increasingly useful in biological and medical research.

The rest of this paper is organized as follows. Section 2 gives an overview of the related literature. Two types of mathematical models and a classification algorithm are proposed in Section 3. Numerical examples demonstrating the effectiveness of the proposed method are presented in Section 4. Finally, the main conclusions of this study and future work are presented in Section 5.

2. Literature Review

Currently, two well-known methods are used to induce classification rules. The first is the decision tree-based method, which has been developed over the last few decades [6–10]. It has been widely applied to fault isolation of an induction motor [18], classification of normal and tumor tissues [19], skeletal maturity assessment [20], proteomic mass spectra classification [21], and other cases [22, 23]. However, the decision tree-based method assumes that all classes can be separated by linear operations, so the induced rules suffer if the boundaries between the classes are nonlinear. In fact, this linearity assumption hinders practical applications because many biological and medical datasets have complicated nonlinear interactions between attributes and predicted classes.

Consider the classification problem with two attributes shown in Figure 1, where “○” represents a first-class object and “●” represents a second-class object. Figure 1 depicts a situation in which a nonlinear relationship exists between the objects of the two classes. The decision tree method induces classification rules for the objects as shown in Figure 1(b), in which it requires four rectangular regions to classify the objects.

Figure 1: Classifying the objects of two classes.

The second is the support vector hyperplane method, which has been used for feature selection and rule extraction from the gene expression data of cancer tissue [24] and in other applications [12–14, 25]. The technique separates observations of different classes by multiple hyperplanes. Because decision variables are required to express the relationship between each training datum and each hyperplane, and finding the separating hyperplanes is formulated as a nonlinear programming problem, training becomes slow for large numbers of training data. Additionally, similar hypersphere support vector methods have been developed by Lin et al. [26], Wang et al. [27], Gu and Wu [28], and Hifi and M'Hallah [29] for classifying objects. These classification algorithms partition the sample space using the sphere-structured support vector machine [14, 30]. However, these methods formulate the classification problem as a nonlinear nonconvex program, which makes reaching an optimal solution difficult. Taking Figure 1 as an example, a hyperplane-based method requires four hyperplanes to discriminate the objects, as shown in Figure 2.

Figure 2: Classifying by the hyperplane method.

As previously mentioned, many biological and medical datasets have complicated boundaries between attributes and classes. Both decision tree-based methods and hyperplane-based methods find only rules with high accuracy, which either cover only a narrow part of the objects or require numerous attributes to explain a classification rule. Although these methods are computationally effective for deducing classification rules, they have the following two limitations.
(i) Decision tree-based methods are heuristic approaches that can only induce feasible rules. Moreover, they split the data into hyperrectangular regions using a single variable, which may generate a large number of branches (i.e., low rates of compactness).
(ii) Hyperplane-based methods use numerous hyperplanes to separate objects of different classes and divide the objects in a dataset into indistinct groups. The method may generate a large number of hyperplanes and associated rules with low rates of compactness.

Therefore, this study proposes a novel hypersphere method to induce classification rules based on a piecewise linearization technique. The technique reformulates the original hypersphere model using a number of binary variables and constraints proportional to the number of piecewise line segments. As the number of break points used in the linearization process increases, the error of the linear approximation decreases, and an approximately global optimal solution of the hypersphere model can be obtained. That is, the proposed method is an optimization approach that can find optimal rules with high rates of accuracy, support, and compactness. The concept of the hypersphere method is depicted in Figure 3, in which only one circle is required to classify the objects. All objects of class “●” are covered by the circle, and the objects not covered by it belong to class “○.”

Figure 3: Classifying by a hypersphere.

3. The Proposed Models and Algorithm

As the classification rules directly affect the rates of accuracy, support, and compactness, we formulate two models to determine the highest accuracy rate and support rate, respectively. To facilitate the discussion, the related notation is introduced first:
$a_{i,j}$: the $j$th attribute value of object $i$,
$c_{t,k,j}$: the $j$th center value of the $k$th hypersphere for class $t$,
$r_{t,k}$: radius of the $k$th hypersphere for class $t$,
$n_t$: number of objects for class $t$,
$x_i$: object $i$ belonging to class $t_i$,
$m$: number of attributes,
$R_t$: a rule describing class $t$.

Based on this notation, we propose two types of classification models as follows.

3.1. Two Types of Classification Models

Considering an object $x_i$ and a hypersphere $S_{t,k}$, and normalizing the attribute values (i.e., to express their scales uniformly), we then have the following three notations.

Notation 1. Normalization rescales every attribute value $a_{i,j}$ as $\bar{a}_{i,j} \in [0,1]$. The following is the normalizing formula:
$$\bar{a}_{i,j} = \frac{a_{i,j} - a_j^{\min}}{a_j^{\max} - a_j^{\min}},$$
where $0 \le \bar{a}_{i,j} \le 1$, $a_j^{\max}$ is the largest value of attribute $j$, and $a_j^{\min}$ is the smallest value of attribute $j$.
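A minimal sketch of this min-max rescaling in Python with NumPy; the array name and sample values are illustrative, not taken from the paper's datasets:

```python
import numpy as np

def normalize(data):
    """Min-max normalize each attribute (column) of an
    objects-by-attributes array into [0, 1]."""
    col_min = data.min(axis=0)  # smallest value of each attribute
    col_max = data.max(axis=0)  # largest value of each attribute
    return (data - col_min) / (col_max - col_min)

# Example: three objects with two attributes.
data = np.array([[4.0, 20.0],
                 [6.0, 10.0],
                 [8.0, 30.0]])
print(normalize(data))  # every entry now lies in [0, 1]
```

A constant attribute (zero range) would need special handling before dividing, but such attributes carry no classification information anyway.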

Notation 2. A general form for expressing an object is written as $x_i = (\bar{a}_{i,1}, \bar{a}_{i,2}, \ldots, \bar{a}_{i,m}; t_i)$, where $t_i$ is the class index of object $i$.

Notation 3. A general form for expressing a hypersphere is written as $S_{t,k} = (c_{t,k,1}, c_{t,k,2}, \ldots, c_{t,k,m}; r_{t,k})$, where $S_{t,k}$ is the $k$th hypersphere for class $t$.
We use two and three dimensions (i.e., two attributes and three attributes) as visualizations to depict a circle and a sphere clearly (Figure 4). Figure 4(a) denotes the centroid of the circle as $(c_1, c_2)$ and the radius of the circle as $r$. These are extended to three dimensions, called a sphere (Figure 4(b)); in $m$ dimensions (i.e., $m$ attributes with $m > 3$), such sets are called hyperspheres.
To find the center and radius of each hypersphere, the following two nonlinear models are considered. The first model looks for a support rate as high as possible while the accuracy rate is fixed to 1, as shown in Model 1.

Figure 4: The concept of the hypersphere method.

Model 1. One has the following:
$$\max \sum_{x_i \in A_t} u_i$$
subject to
$$\sum_{j=1}^{m} (\bar{a}_{i,j} - c_{t,k,j})^2 \le r_{t,k}^2 + M(1 - u_i) \quad \text{for all } x_i \in A_t,$$
$$\sum_{j=1}^{m} (\bar{a}_{i,j} - c_{t,k,j})^2 \ge r_{t,k}^2 + \varepsilon \quad \text{for all } x_i \in B_t,$$
$$u_i \in \{0, 1\}, \quad r_{t,k} \ge 0,$$
where $M$ is a large positive constant, $\varepsilon$ is a small positive tolerance, and $A_t$ and $B_t$ are the two sets for all objects expressed, respectively, by
$$A_t = \{x_i \mid t_i = t\}, \tag{3.5}$$
$$B_t = \{x_i \mid t_i \ne t\}. \tag{3.6}$$
Referring to Li and Chen [3], the rates of accuracy and support of $R_t$ in Model 1 can be specified by the following definitions.

Definition 3.1. The accuracy rate of a rule $R_t$ for Model 1 is $p_a(R_t) = 1$, since no object outside class $t$ may be covered.

Definition 3.2. The support rate of a rule $R_t$ for Model 1 is specified as follows.
(i) If some hypersphere in $S_t$ covers an object $x_i$ belonging to class $t$, then $u_i = 1$; otherwise $u_i = 0$, where $S_t$ indicates the hypersphere set for class $t$.
(ii) $p_s(R_t) = \frac{1}{n_t} \sum_{x_i \in A_t} u_i$, where $n_t$ indicates the number of objects belonging to class $t$.
The second model looks for an accuracy rate as high as possible while the support rate is fixed to 1, as shown in Model 2.

Model 2. One has the following:
$$\min \sum_{x_i \in B_t} v_i$$
subject to
$$\sum_{j=1}^{m} (\bar{a}_{i,j} - c_{t,k,j})^2 \le r_{t,k}^2 \quad \text{for all } x_i \in A_t,$$
$$\sum_{j=1}^{m} (\bar{a}_{i,j} - c_{t,k,j})^2 \ge r_{t,k}^2 + \varepsilon - M v_i \quad \text{for all } x_i \in B_t,$$
$$v_i \in \{0, 1\}, \quad r_{t,k} \ge 0,$$
where $v_i = 1$ if an object of another class is covered, and $A_t$ and $B_t$ are the two sets expressed by (3.5) and (3.6), respectively.
Similarly, the rates of accuracy and support of $R_t$ in Model 2 can be specified as follows.

Definition 3.3. The accuracy rate of a rule of Model 2 is denoted as $p_a(R_t)$ and is specified as follows.
(i) If an object $x_i$ covered by some hypersphere in $S_t$ belongs to class $t$, then $w_i = 1$; otherwise $w_i = 0$, where $S_t$ represents the hypersphere set for class $t$.
(ii) $p_a(R_t) = \frac{1}{N_t} \sum w_i$, where $N_t$ represents the total number of objects covered by $S_t$.

Definition 3.4. The support rate of a rule of Model 2 is denoted as $p_s(R_t)$, and $p_s(R_t) = 1$.

Definition 3.5. The compactness rate of a set of rules $R = \{R_1, \ldots, R_T\}$, denoted as $p_c(R)$, is expressed as follows:
$$p_c(R) = \frac{T}{\sum_{t=1}^{T} g_t},$$
where $g_t$ means the number of hyperspheres and unions of hyperspheres for class $t$, and $T$ is the number of classes. A union of hyperspheres indicates that an object may be covered by several overlapping hyperspheres, as shown in Figure 5. Take Figure 5 as an example, in which there are two classes. The objects of class “○” are covered by two unions of circles, and the objects of class “●” are covered by one circle. Therefore, $g_1 = 2$, $g_2 = 1$, and $p_c(R) = 2/(2+1) = 2/3$.
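Once a set of hyperspheres is known, the three rates are direct to compute. The following Python sketch assumes NumPy arrays for the objects and labels plus a per-class list of (center, radius) pairs; all names are illustrative, and the compactness formula follows the reconstruction in Definition 3.5:

```python
import numpy as np

def covered(x, spheres):
    """True if object x lies inside at least one (center, radius) hypersphere."""
    return any(np.sum((x - c) ** 2) <= r ** 2 for c, r in spheres)

def support_rate(objects, labels, spheres_by_class, t):
    """Definition 3.2: fraction of class-t objects covered by class-t hyperspheres."""
    members = objects[labels == t]
    return sum(covered(x, spheres_by_class[t]) for x in members) / len(members)

def accuracy_rate(objects, labels, spheres_by_class, t):
    """Definition 3.3: fraction of the objects covered by class-t hyperspheres
    that actually belong to class t."""
    inside = [lab for x, lab in zip(objects, labels)
              if covered(x, spheres_by_class[t])]
    return sum(lab == t for lab in inside) / max(len(inside), 1)

def compactness_rate(groups_per_class):
    """Definition 3.5 (as reconstructed): number of classes divided by the
    total count of hyperspheres and unions over all classes."""
    return len(groups_per_class) / sum(groups_per_class)
```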
Moreover, Models 1 and 2 are separable nonlinear programs that can be solved to optimality by linearizing the quadratic terms $c_{t,k,j}^2$. The piecewise linearization technique is discussed below.
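To see why only $c_{t,k,j}^2$ needs linearizing, expand each squared distance (a short derivation consistent with the models above):
$$\sum_{j=1}^{m}(\bar{a}_{i,j}-c_{t,k,j})^2 = \sum_{j=1}^{m}\left(\bar{a}_{i,j}^2 - 2\,\bar{a}_{i,j}\,c_{t,k,j} + c_{t,k,j}^2\right).$$
Here $\bar{a}_{i,j}$ is a known constant, so $\bar{a}_{i,j}^2$ is constant and $-2\bar{a}_{i,j}c_{t,k,j}$ is linear in the decision variable $c_{t,k,j}$; only $c_{t,k,j}^2$ is nonlinear.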

Figure 5: Classifying by the hypersphere method.

Proposition 3.6 (referring to Beale and Forrest [31]). Denote the approximate function $L(f(c))$ as a piecewise linear function (i.e., a linear convex combination) of $f(c)$, where $b_1 < b_2 < \cdots < b_q$ represent the break points of $c$. $L(f(c))$ is expressed as follows:
$$L(f(c)) = \sum_{p=1}^{q} \lambda_p f(b_p), \qquad c = \sum_{p=1}^{q} \lambda_p b_p, \qquad \sum_{p=1}^{q} \lambda_p = 1,$$
$$\{\lambda_1, \lambda_2, \ldots, \lambda_q\} \ \text{is an SOS2 set}, \tag{3.13}$$
where $\lambda_p \ge 0$ for all $p$, and (3.13) is a special-ordered set of type 2 (SOS2) constraint (see Beale and Forrest [31]).
Note that the SOS2 constraint is a set of variables in which at most two variables may be nonzero. If two variables are nonzero, they must be adjacent in the set.

Notation 4. According to Proposition 3.6, let $f(c_{t,k,j}) = c_{t,k,j}^2$. The quadratic term $c_{t,k,j}^2$ is linearized by Proposition 3.6 and is expressed as $L(c_{t,k,j}^2)$.
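To illustrate Proposition 3.6 and Notation 4 numerically, the following Python sketch evaluates the convex-combination approximation of $f(c) = c^2$; here the SOS2 adjacency condition is satisfied by construction (we pick the segment containing $c$) rather than left to solver branching, and the break-point count and test values are illustrative:

```python
import numpy as np

def piecewise_sq(c, breaks):
    """Approximate c**2 as a convex combination of the two adjacent
    break points b_p <= c <= b_{p+1} (the SOS2 condition: at most two
    nonzero weights, and they must be adjacent)."""
    p = np.searchsorted(breaks, c) - 1
    p = max(0, min(p, len(breaks) - 2))
    b_lo, b_hi = breaks[p], breaks[p + 1]
    lam = (b_hi - c) / (b_hi - b_lo)  # weight on the lower break point
    return lam * b_lo**2 + (1 - lam) * b_hi**2

# Eight line segments over [0, 1], i.e., nine break points.
breaks = np.linspace(0.0, 1.0, 9)
for c in (0.3, 0.47, 0.9):
    print(c, piecewise_sq(c, breaks), c**2)  # approximation vs. exact value
```

In the actual mixed-integer model, the $\lambda_p$ are decision variables and adjacency is enforced through the solver's SOS2 facility (as in CPLEX) or with auxiliary binary variables.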

3.2. Solution Algorithm

A proposed algorithm is also presented to seek the highest accuracy rate or the highest support rate, as follows.

Algorithm 3.7.
Step 1. Normalize all attributes (i.e., rescale each $a_{i,j}$ to $\bar{a}_{i,j} \in [0,1]$).
Step 2. Initialization: $t = 1$ and $k = 1$.
Step 3. Solve Model 1 (or Model 2) to obtain the $k$th hypersphere $S_{t,k}$ of class $t$. Temporarily remove the objects covered by $S_{t,k}$ from the dataset.
Step 4. Let $k = k + 1$, and re-solve Model 1 (or Model 2) until all objects in class $t$ are assigned to hyperspheres of the same class.
Step 5. Let $k = 1$ and $t = t + 1$, and reiterate Step 3 until all classes are processed.
Step 6. Check the independent hyperspheres and unions of hyperspheres in the same class $t$.
Step 7. Calculate and record the number of independent hyperspheres and unions of hyperspheres in $g_t$, and iterate until all classes are done.
According to this algorithm, we can obtain the optimal rules to classify objects efficiently. The process of the algorithm is depicted in Figure 6; a code sketch follows the figure.

Figure 6: Flowchart of the proposed algorithm.
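The control flow of Algorithm 3.7 can be sketched as follows in Python. The function `solve_model1` is hypothetical; it stands in for solving the linearized Model 1 (e.g., via CPLEX) and returning the next hypersphere together with the class-t objects it covers, so the sketch shows structure rather than a working solver:

```python
def induce_rules(objects, labels, solve_model1):
    """Algorithm 3.7: peel off one hypersphere at a time per class.
    `solve_model1(remaining, objects, labels, t)` is assumed to return
    ((center, radius), covered_idx) for the next hypersphere of class t."""
    rules = {}
    for t in sorted(set(labels)):                  # Steps 2 and 5
        remaining = [i for i, lab in enumerate(labels) if lab == t]
        spheres = []
        while remaining:                           # Steps 3 and 4
            (center, radius), covered_idx = solve_model1(
                remaining, objects, labels, t)
            spheres.append((center, radius))
            remaining = [i for i in remaining if i not in covered_idx]
        rules[t] = spheres                         # Steps 6 and 7: group the
    return rules                                   # overlapping spheres into unions
```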
3.3. Operation of a Simple Example

Consider the dataset in Table 1 as an example, which has 15 objects $x_i$, two attributes $(a_{i,1}, a_{i,2})$, and a class index $t_i$ for each object. Each object is expressed as $x_i = (a_{i,1}, a_{i,2}; t_i)$, and Table 1 lists the domain values of the attributes. As there are only two attributes, the 15 objects can be plotted in a two-dimensional space after normalization, as shown in Figure 7(a).

Table 1: Dataset of Example 1.
Figure 7: Visualization of Example 1.

This example can be solved by the proposed algorithm as follows.

Step 1. Normalize all attributes (i.e., rescale $a_{i,1}$ and $a_{i,2}$ to $\bar{a}_{i,1}, \bar{a}_{i,2} \in [0,1]$).

Step 2. Initialization: t = 1 and k = 1.

Step 3. The classification model (i.e., Model 1) is linearly formulated by replacing each quadratic term $c_{1,1,j}^2$ with its piecewise linear approximation $L(c_{1,1,j}^2)$ from Proposition 3.6. The optimal solution is $(c_{1,1,1}, c_{1,1,2}, r_{1,1}) = (0.047, 0.265, 0.15749)$ for $S_{1,1}$, where $S_{1,1}$ covers objects 1–4. We then temporarily remove the objects covered by $S_{1,1}$.

Step 4. $k = k + 1$: the optimal solution is $(c_{1,2,1}, c_{1,2,2}, r_{1,2}) = (0.736, 0.5, 0.0138)$ for $S_{1,2}$, where $S_{1,2}$ covers objects 5 and 6. Class 1 is then done.

Step 5. Setting $k = 1$ and $t = t + 1$ and iterating Steps 3 and 4, we obtain the optimal solutions for the remaining classes as follows. The results are shown in Figure 7(b).
(i) A hypersphere covering objects 7–9.
(ii) A hypersphere covering objects 10 and 11.
(iii) A hypersphere covering objects 12–14.

Step 6. Check and calculate the unions of hyperspheres for all hyperspheres in class $t$ (i.e., initially $t = 1$).

Step 7. Mark the number of unions of each class $t$ into $g_t$ and iterate Step 6 until all classes are processed.

4. Numerical Examples

This section evaluates the performance of the proposed model, including the accuracy, support, and compactness rates, and compares it with different methods, using CPLEX [32] as the solver. All tests were run on a PC equipped with an Intel Pentium D 2.8 GHz CPU and 2 GB RAM. Three datasets were tested in our experiments:
(i) the Iris Flower dataset introduced by Sir Ronald Aylmer Fisher (1936),
(ii) the European barn swallow (Hirundo rustica) dataset obtained by trapping individual swallows in Stirlingshire, Scotland, between May and July 1997 [1, 3],
(iii) the highly selective vagotomy (HSV) patient dataset of F. Raszeja Memorial Hospital in Poland [3, 33, 34].

4.1. Iris Flower Dataset

The Iris Flower dataset contains 150 objects. Each object is described by four attributes (i.e., sepal length, sepal width, petal length, and petal width) and is classified into one of three classes (i.e., setosa, versicolor, and virginica). By solving the proposed method, we induced six hyperspheres (i.e., $S_{1,1}$; $S_{2,1}$, $S_{2,2}$, $S_{2,3}$; and $S_{3,1}$, $S_{3,2}$). The induced classification rules are reported in Table 2, which lists one hypersphere and two unions of hyperspheres (i.e., rules $R_1$, $R_2$, and $R_3$) with their centroid points and radii.

Table 2: Centroid points for the Iris data set by the proposed method.

Rule $R_1$ in Table 2 contains a hypersphere $S_{1,1}$, which implies that
(i) “if $\sum_{j=1}^{4}(\bar{a}_{i,j}-c_{1,1,j})^2 \le r_{1,1}^2$, then object $x_i$ belongs to class 1.”

Rule $R_2$ in Table 2 contains a union of three hyperspheres (i.e., $S_{2,1} \cup S_{2,2} \cup S_{2,3}$), which implies that
(i) “if $\sum_{j=1}^{4}(\bar{a}_{i,j}-c_{2,1,j})^2 \le r_{2,1}^2$, then object $x_i$ belongs to class 2,” or
(ii) “if $\sum_{j=1}^{4}(\bar{a}_{i,j}-c_{2,2,j})^2 \le r_{2,2}^2$, then object $x_i$ belongs to class 2,” or
(iii) “if $\sum_{j=1}^{4}(\bar{a}_{i,j}-c_{2,3,j})^2 \le r_{2,3}^2$, then object $x_i$ belongs to class 2.”

Rule $R_3$ in Table 2 contains a union of two hyperspheres (i.e., $S_{3,1} \cup S_{3,2}$), which implies that
(i) “if $\sum_{j=1}^{4}(\bar{a}_{i,j}-c_{3,1,j})^2 \le r_{3,1}^2$, then object $x_i$ belongs to class 3,” or
(ii) “if $\sum_{j=1}^{4}(\bar{a}_{i,j}-c_{3,2,j})^2 \le r_{3,2}^2$, then object $x_i$ belongs to class 3.”
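Applying such rules amounts to testing sphere membership union by union. A minimal Python sketch follows, with placeholder centroids and radii purely for illustration (the actual values are those reported in Table 2):

```python
import numpy as np

def apply_rules(x, rules):
    """Classify a normalized object x by the first union of hyperspheres
    (class -> list of (center, radius)) that covers it."""
    for cls, spheres in rules.items():
        if any(np.sum((x - c) ** 2) <= r ** 2 for c, r in spheres):
            return cls
    return None  # covered by no rule

# Placeholder rule set in the shape of Table 2 (values are illustrative).
rules = {
    1: [(np.array([0.2, 0.6, 0.1, 0.1]), 0.25)],
    2: [(np.array([0.5, 0.4, 0.5, 0.5]), 0.15),
        (np.array([0.6, 0.3, 0.55, 0.5]), 0.10),
        (np.array([0.45, 0.35, 0.5, 0.45]), 0.12)],
    3: [(np.array([0.7, 0.45, 0.8, 0.85]), 0.20),
        (np.array([0.65, 0.4, 0.75, 0.8]), 0.15)],
}
print(apply_rules(np.array([0.21, 0.58, 0.12, 0.1]), rules))  # -> 1
```

Because Model 1 fixes the accuracy rate to 1, the hyperspheres of different classes cover no foreign objects, so the order in which unions are tested does not matter for covered objects.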

Comparing the proposed method with both the decision tree [3] and hyperplane [35] methods in deducing the classification rules for the Iris Flower dataset, Table 3 lists the experimental results.

Table 3: Comparing results for the Iris Flower data set ($R_1$, $R_2$, $R_3$).

The accuracy rates of ($R_1$, $R_2$, $R_3$) in the proposed method are (1, 1, 1), as Model 1 has been solved. This finding indicates that none of the objects in class 2 or class 3 are covered by $R_1$, none of the objects in class 1 or class 3 are covered by $R_2$, and none of the objects in class 1 or class 2 are covered by $R_3$. The support rates of ($R_1$, $R_2$, $R_3$) in the proposed method are (1, 0.98, 0.98), indicating that all objects in class 1 are covered by $S_{1,1}$; 98% of the objects in class 2 are covered by $S_{2,1}$, $S_{2,2}$, and $S_{2,3}$; and 98% of the objects in class 3 are covered by $S_{3,1}$ and $S_{3,2}$. The compactness rate of rules $R_1$, $R_2$, and $R_3$ is computed as $p_c = 3/(1+1+1) = 1$. Finally, we determine the following.
(i) Although all three methods perform very well in the rates of accuracy and support, the proposed method has the best performance for the accuracy of classes 2 and 3 (i.e., $R_2$ and $R_3$).
(ii) The proposed method has the best compactness rate.

4.2. Swallow Dataset

The European barn swallow (Hirundo rustica) dataset was obtained by trapping individual swallows in Stirlingshire, Scotland, between May and July 1997. This dataset contains 69 swallows. Each object is described by eight attributes and belongs to one of two classes (i.e., the birds are classified by gender).

Here, we also used Model 1 to induce the classification rules. Table 4 lists the optimal solutions (i.e., centroids and radii) for both rules $R_1$ and $R_2$.

Table 4: Centroid points for the Swallow data set by the proposed method.

The result of the decision tree method, referred from Li and Chen [3], is listed in Table 5.

Table 5: Comparing results for the Swallow data set ($R_1$, $R_2$).

The result of the hyperplane method, referred from Chang and Lin [35], is also listed in Table 5.

Comparing the three methods in Table 5 shows that the proposed method can induce rules with better or equivalent rates of accuracy and support. In fact, the proposed method also has the best compactness rate.

4.3. HSV Dataset

The HSV dataset contains 122 patients classified into four classes, with each patient described by 11 preoperative attributes. Using Model 1 to maximize the support rate, the proposed method generated seven hyperspheres and three unions of hyperspheres. The centroids and radii of the hyperspheres are reported in Table 6, and a comparison with other methods is reported in Table 7.

Table 6: Centroid points for the HSV data by the proposed method.
Table 7: Comparing results for the HSV data set ($R_1$, $R_2$, $R_3$, $R_4$).

The decision tree method generates 24 rules for the HSV dataset, and the hyperplane method deduces 45 hyperplanes. Table 7 also shows that the proposed method finds rules with the highest rates of accuracy, support, and compactness compared with the other two methods.

4.4. Limitation of the Proposed Method

The hypersphere models are solved by CPLEX [32], one of the most powerful mixed-integer programming packages, running on a PC. The results of the numerical examples illustrate that the proposed method outperforms the current methods, including the decision tree method and the hyperplane support vector method. As the solving time of the linearized hypersphere model mainly depends on the numbers of binary variables and constraints, solving the reformulated hypersphere model by the proposed algorithm takes about one minute for each dataset (i.e., in Sections 4.1 and 4.3), in which eight piecewise line segments are used to linearize each nonlinear nonconvex term (i.e., $c_{t,k,j}^2$) of Model 1.
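As a rough sanity check on the eight-segment choice, the worst-case error of the piecewise approximation of $c^2$ over $[0,1]$ can be tabulated; this illustrative Python script uses chord interpolation, which coincides with the SOS2 convex-combination approximation of Proposition 3.6:

```python
import numpy as np

def max_error(num_segments):
    """Worst-case gap between c**2 and its chord approximation on [0, 1]."""
    breaks = np.linspace(0.0, 1.0, num_segments + 1)
    grid = np.linspace(0.0, 1.0, 10_001)
    # Linear interpolation through (b_p, b_p**2) equals the SOS2 approximation.
    approx = np.interp(grid, breaks, breaks ** 2)
    return np.max(approx - grid ** 2)  # chord lies above the convex function

for q in (2, 4, 8, 16):
    print(q, max_error(q))  # error shrinks by a factor of 4 per doubling
```

For $f(c) = c^2$, the maximum gap on a segment of width $h$ is $h^2/4$, so eight segments over $[0,1]$ give a worst-case error of $1/256 \approx 0.004$.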

The computing time for solving a linearized hypersphere program grows rapidly as the numbers of binary variables and constraints increase. Also, the proposed method is slower than the decision tree method and the hyperplane method, especially for large datasets or a large number of piecewise line segments. In further studies, utilizing mainframe-version optimization software [36–38], integrating meta-heuristic algorithms, or using distributed computing techniques can enhance the solving speed to overcome this problem.

5. Conclusions and Future Work

This study proposes a novel method for deducing classification rules, which finds the optimal solution over a hypersphere domain via a global optimization technique. Results of the numerical examples illustrate that the proposed method outperforms the current methods, including the decision tree method and the hyperplane method. The proposed method is guaranteed to find an optimal rule, but its computational complexity grows rapidly as the problem size increases. More investigation and research are required to further enhance the computational efficiency of globally solving large-scale classification problems, such as running mainframe-version optimization software, integrating meta-heuristic algorithms, or using distributed computing techniques.

Acknowledgments

The authors wish to thank the editor and the anonymous referees for providing insightful comments and suggestions, which have helped them improve the quality of the paper. This work was supported by the National Science Council of Taiwan under Grants NSC 100-2811-E-009-040-, NSC 99-2221-E-030-005-, and NSC 100-2221-E-030-009-.

References

1. M. J. Beynon and K. L. Buchanan, “An illustration of variable precision rough set theory: the gender classification of the European barn swallow (Hirundo rustica),” Bulletin of Mathematical Biology, vol. 65, no. 5, pp. 835–858, 2003.
2. H. L. Li and C. J. Fu, “A linear programming approach for identifying a consensus sequence on DNA sequences,” Bioinformatics, vol. 21, no. 9, pp. 1838–1845, 2005.
3. H. L. Li and M. H. Chen, “Induction of multiple criteria optimal classification rules for biological and medical data,” Computers in Biology and Medicine, vol. 38, no. 1, pp. 42–52, 2008.
4. C. W. Chu, G. S. Liang, and C. T. Liao, “Controlling inventory by combining ABC analysis and fuzzy classification,” Computers and Industrial Engineering, vol. 55, no. 4, pp. 841–851, 2008.
5. J.-X. Chen, “Peer-estimation for multiple criteria ABC inventory classification,” Computers & Operations Research, vol. 38, no. 12, pp. 1784–1791, 2011.
6. E. B. Hunt, J. Marin, and P. J. Stone, Experiments in Induction, Academic Press, New York, NY, USA, 1966.
7. L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, Classification and Regression Trees, Wadsworth Statistics/Probability Series, Wadsworth Advanced Books and Software, Belmont, Calif, USA, 1984.
8. J. R. Quinlan, “Induction of decision trees,” Machine Learning, vol. 1, no. 1, pp. 81–106, 1986.
9. J. R. Quinlan, “Simplifying decision trees,” International Journal of Man-Machine Studies, vol. 27, no. 3, pp. 221–234, 1987.
10. J. R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann, San Mateo, Calif, USA, 1993.
11. H. Kim and W. Y. Loh, “Classification trees with unbiased multiway splits,” Journal of the American Statistical Association, vol. 96, no. 454, pp. 589–604, 2001.
12. V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 1995.
13. R. M. Rifkin, Everything Old Is New Again: A Fresh Look at Historical Approaches in Machine Learning, Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, Mass, USA, 2002.
14. S. Katagiri and S. Abe, “Incremental training of support vector machines using hyperspheres,” Pattern Recognition Letters, vol. 27, no. 13, pp. 1495–1507, 2006.
15. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533–536, 1986.
16. C. H. Chu and D. Widjaja, “Neural network system for forecasting method selection,” Decision Support Systems, vol. 12, no. 1, pp. 13–24, 1994.
17. C. Giraud-Carrier, R. Vilalta, and P. Brazdil, “Guest editorial: introduction to the special issue on meta-learning,” Machine Learning, vol. 54, no. 3, pp. 187–193, 2004.
18. D. Pomorski and P. B. Perche, “Inductive learning of decision trees: application to fault isolation of an induction motor,” Engineering Applications of Artificial Intelligence, vol. 14, no. 2, pp. 155–166, 2001.
19. H. Zhang, C. N. Yu, B. Singer, and M. Xiong, “Recursive partitioning for tumor classification with gene expression microarray data,” Proceedings of the National Academy of Sciences of the United States of America, vol. 98, no. 12, pp. 6730–6735, 2001.
20. S. Aja-Fernández, R. De Luis-García, M. Á. Martín-Fernández, and C. Alberola-López, “A computational TW3 classifier for skeletal maturity assessment. A Computing with Words approach,” Journal of Biomedical Informatics, vol. 37, no. 2, pp. 99–107, 2004.
21. P. Geurts, M. Fillet, D. de Seny et al., “Proteomic mass spectra classification using decision tree based ensemble methods,” Bioinformatics, vol. 21, no. 14, pp. 3138–3145, 2005.
22. C. Olaru and L. Wehenkel, “A complete fuzzy decision tree technique,” Fuzzy Sets and Systems, vol. 138, no. 2, pp. 221–254, 2003.
23. M. Pal and P. M. Mather, “An assessment of the effectiveness of decision tree methods for land cover classification,” Remote Sensing of Environment, vol. 86, no. 4, pp. 554–565, 2003.
24. Z. Chen, J. Li, and L. Wei, “A multiple kernel support vector machine scheme for feature selection and rule extraction from gene expression data of cancer tissue,” Artificial Intelligence in Medicine, vol. 41, no. 2, pp. 161–175, 2007.
25. H. Nunez, C. Angulo, and A. Catala, “Rule-extraction from support vector machines,” in Proceedings of the European Symposium on Artificial Neural Networks, pp. 107–112, 2002.
26. Y. M. Lin, X. Wang, W. W. Y. Ng, Q. Chang, D. S. Yeung, and X. L. Wang, “Sphere classification for ambiguous data,” in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 2571–2574, August 2006.
27. J. Wang, P. Neskovic, and L. N. Cooper, “Bayes classification based on minimum bounding spheres,” Neurocomputing, vol. 70, no. 4–6, pp. 801–808, 2007.
28. L. Gu and H. Z. Wu, “A kernel-based fuzzy greedy multiple hyperspheres covering algorithm for pattern classification,” Neurocomputing, vol. 72, no. 1–3, pp. 313–320, 2008.
29. M. Hifi and R. M'Hallah, “A literature review on circle and sphere packing problems: models and methodologies,” Advances in Operations Research, vol. 2009, Article ID 150624, 22 pages, 2009.
30. M. Zhu, Y. Wang, S. Chen, and X. Liu, “Sphere-structured support vector machines for multi-class pattern recognition,” in Proceedings of the 9th International Conference (RSFDGrC '03), vol. 2639 of Lecture Notes in Computer Science, pp. 589–593, May 2003.
31. E. M. L. Beale and J. J. H. Forrest, “Global optimization using special ordered sets,” Mathematical Programming, vol. 10, no. 1, pp. 52–69, 1976.
32. IBM/ILOG, “CPLEX 12.0 reference manual,” 2010, http://www.ilog.com/products/cplex/.
33. D. C. Dunn, W. E. G. Thomas, and J. O. Hunter, “An evaluation of highly selective vagotomy in the treatment of chronic duodenal ulcer,” Surgery Gynecology and Obstetrics, vol. 150, no. 6, pp. 845–849, 1980.
34. K. Slowinski, “Rough classification of HSV patients,” in Intelligent Decision Support-Handbook of Applications and Advances of the Rough Sets Theory, R. Slowinski, Ed., pp. 77–94, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1992.
35. C. C. Chang and C. J. Lin, “LIBSVM: a library for support vector machines,” 2010, http://www.csie.ntu.edu.tw/~cjlin/libsvm/index.html.
36. C. Than, R. Sugino, H. Innan, and L. Nakhleh, “Efficient inference of bacterial strain trees from genome-scale multilocus data,” Bioinformatics, vol. 24, no. 13, pp. i123–i131, 2008.
37. M. Rockville, “Large scale computing and storage requirements for biological and environmental research,” Tech. Rep., Ernest Orlando Lawrence Berkeley National Laboratory, Berkeley, Calif, USA, 2009.
38. X. Xie, X. Fang, S. Hu, and D. Wu, “Evolution of supercomputers,” Frontiers of Computer Science in China, vol. 4, no. 4, pp. 428–436, 2010.