The Scientific World Journal
Volume 2013 (2013), Article ID 603897, 9 pages
Research Article

Application of the Support Vector Machine to Predict Subclinical Mastitis in Dairy Cattle

Nazira Mammadova and İsmail Keskin

1Department of Animal Science, Faculty of Agriculture, Siirt University, 56100 Siirt, Turkey
2Department of Animal Science, Faculty of Agriculture, Selçuk University, 42075 Konya, Turkey

Received 28 August 2013; Accepted 3 October 2013

Academic Editors: T. Aire, N. Dönmez, and T. F. Robinson

Copyright © 2013 Nazira Mammadova and İsmail Keskin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This study presented a potentially useful alternative approach to ascertain the presence of subclinical and clinical mastitis in dairy cows using support vector machine (SVM) techniques. The proposed method detected mastitis in a cross-sectional representative sample of Holstein dairy cattle milked using an automatic milking system. The study used such suspected indicators of mastitis as lactation rank, milk yield, electrical conductivity, average milking duration, and control season as input data. The output variable was somatic cell counts obtained from milk samples collected monthly throughout the 15 months of the control period. Cattle were judged to be healthy or infected based on those somatic cell counts. This study undertook a detailed scrutiny of the SVM methodology, constructing and examining a model which showed 89% sensitivity, 92% specificity, and 50% error in mastitis detection.

1. Introduction

Mastitis is a primary problem facing dairy herd producers. It influences not only the yield and composition of milk but also the well-being of cows [1]. Early detection of subclinical symptoms has been a goal of research for many years. Mastitis significantly impacts the milk industry in Turkey, as elsewhere in the world [2].

Mastitis, a disease of the udder typically caused by bacterial infection, leads to substantial economic losses by reducing milk yields [3–5]. Most mastitis events are subclinical and thus go unobserved and untreated; their prevalence amounts to the submerged part of an iceberg compared with clinical events. According to Tekeli [6], in herds without an udder health control program, 50% of cows are on average infected with subclinical mastitis, with 2 udder quarters positive for subclinical mastitis. Of mastitis-related economic losses, an estimated 20–30% are caused by clinical mastitis, with the remainder caused by subclinical mastitis. In 90–95% of cases, the udder and milk appear normal despite increased somatic cell count (SCC) and decreased milk quality and yield. This long-lasting disease also slows calf development [6].

Various diagnostic tools for mastitis have been proposed, some of which are currently used in the industry [7]. SCC has long been the most widely used measure due to its high correlation with mastitis [7–10]. Nevertheless, doubts exist as to its usefulness as an indicator of intramammary infection [11]. SCC is affected by age, type, lactation rank, milk yield, anatomical and physiological characteristics of the udder, stress, season, nutrition, shelter conditions, milking technique, and mastitis [12–22]. Electrical conductivity (EC), used to detect changes in milk composition associated with mastitis, is another proven measure now used with increasing frequency in the dairy industry. Mastitis is associated with increased conductivity of udder tissue and with changes in milk's ionic composition: levels of certain mineral substances decrease while Na and Cl levels increase, all of which raises electrical conductivity [23–25]. In modern enterprises that use a computerized herd management system, valuable quantitative traits such as milk yield, milk flow rate, and electrical conductivity are automatically recorded during milking. Such programs flag cows with excessive deviation in electrical conductivity as having mastitis. However, these alarms have been observed to be wrong most of the time, so using EC alone to diagnose mastitis is a suspect practice [2]. Some studies [25–28] have determined that EC could be an indicator of subclinical mastitis when conductivity exceeds 5.5 mS/cm.
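The EC threshold cited above can be illustrated with a minimal rule-based check. This is only a sketch of the single-trait alarm the paragraph criticizes, not part of the study's SVM method; the function name and sample values are ours.

```python
# Illustrative single-trait alarm: studies cited in the text suggest milk
# electrical conductivity above roughly 5.5 mS/cm may indicate subclinical
# mastitis. The text notes that such alarms alone are often wrong.
EC_THRESHOLD_MS_CM = 5.5

def flag_suspect_milkings(ec_readings):
    """Return indices of milkings whose EC exceeds the threshold."""
    return [i for i, ec in enumerate(ec_readings) if ec > EC_THRESHOLD_MS_CM]

readings = [4.8, 5.2, 6.1, 5.6, 4.9]      # mS/cm, example values
print(flag_suspect_milkings(readings))     # [2, 3]
```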

Early detection of mastitis is of great importance in terms of improving the quality of milk production, eliminating economic losses, and protecting animal welfare. Using a combination of traits could be a better way to ascertain a cow’s health status at any given time. One objective of this research was to use both SCC and EC along with other milk traits to cluster milk samples into two mastitis health categories using SVM. This paper presents a potentially useful SVM-based approach to classify dairy cows into healthy and subclinical mastitis groups.

Vapnik's [29] SVM discriminates input data between two classes by generating a hyperplane that optimally separates the classes after the input data have been transformed mathematically into a high-dimensional space. SVM is based on the principle of structural risk minimization and has proven especially suitable for high-dimensional, small-sample problems. It has two typical applications: classification and regression. SVM has been applied to classification problems such as pattern recognition, text recognition, and protein classification and has yielded high-quality results. As a novel machine learning method, SVM has succeeded in solving complex problems such as support vector regression and face recognition tasks [30, 31].

Recently, SVM has been used to automate disease classification, to improve disease detection methods [32], and to address classification problems in many biomedical fields [33, 34]. The SVM algorithm [29, 35] provides state-of-the-art performance in a wide variety of application domains, including handwriting recognition, object recognition, speaker identification, face detection, and text categorization [36]. To date, SVM has been broadly applied in computational biology to address pattern recognition problems, including protein remote homology detection, microarray gene expression analysis, recognition of translation start sites, functional classification of promoter regions, prediction of protein-protein interactions, and identification of peptides from mass spectrometry data [37]. Many biological problems involve high-dimensional, noisy data, for which SVMs are known to perform well compared with other statistical or machine learning methods. In contrast to most machine learning methods, kernel methods such as SVM can easily handle nonvector inputs such as variable-length sequences or graphs, data types that are common in biology and often require the engineering of knowledge-based kernel functions.

The model for the present study was established using the input data of lactation rank (LR), milk yield (MY), electrical conductivity (EC), average milking duration (AMD), and control season (CS). SCC was calculated for milk samples taken once a month over a 15-month period and was used as a system-output indicator of whether an animal was healthy or had subclinical mastitis. Enterprises with large herds do not typically use SCC to gain immediate indications of animals at risk of subclinical mastitis but instead use the data available in the current computing environment.
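The labeling step described above can be sketched as follows. The 200,000 cells/mL cutoff is a common convention assumed here purely for illustration; the paper does not state the exact SCC threshold it used, and the record values are invented.

```python
# Sketch of the output labeling: each monthly record carries the five inputs
# (LR, MY, EC, AMD, CS) plus an SCC value, and the SCC determines the label.
# The 200,000 cells/mL cutoff is an assumed, illustrative threshold.
SCC_CUTOFF = 200_000

def label_record(scc):
    """Return +1 for suspected subclinical mastitis, -1 for healthy."""
    return 1 if scc >= SCC_CUTOFF else -1

# (LR, MY, EC, AMD, CS, SCC) - invented example records
records = [(3, 28.4, 5.1, 6.2, 1, 150_000),
           (2, 22.0, 6.0, 7.5, 2, 310_000)]
labels = [label_record(r[-1]) for r in records]
print(labels)  # [-1, 1]
```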

Predictions based on this approach were compared to those produced by binary logistic regression models containing the same set of variables. A final goal was to illustrate the applicability of the SVM approach by creating a demonstration web-based classification tool. The SVM model enabled the selection of sets of variables that would yield the best classification of cows into the desired groups.

2. Materials and Methods

To develop and validate the SVM model for classifying dairy cows into healthy and mastitic groups, data were collected from February 2010 to April 2011 for 170 Holstein Friesian cattle raised at the KARYEM Agricultural Enterprise, a private dairy farm situated in the Karapınar district (37°47′ North, 33°35′ East, 994 m altitude) of Konya province.

To maximize profitability, the KARYEM Agricultural Enterprise uses a professional herd management system that immediately records and stores all events and uses these data to predict developments that could create problems, making all necessary measurements and determinations and supplying the manager with the information thus generated. A user can enter information for individual animals, which can also be registered automatically by the system. The program automatically collects data derived from the automatic milking system, such as milk yield, average milking duration, and milk electrical conductivity.

Milking at KARYEM occurred twice a day, from 03:00 to 06:00 and from 15:00 to 18:00. Milk samples (50 cc) were carefully collected from the automatic milking system with the help of a sampling apparatus once a month from February 2010 until April 2011, with the primary goal of determining the milk's somatic cell count (SCC). SCC was counted in the laboratory under a microscope from samples spread over slides and stained. To increase reliability, the count was taken twice for each sample, and the two counts were averaged for data entry.

The SVM algorithm is an innovative artificial-intelligence-based method of data mining [35, 38]. SVM distinguishes two classes directly by finding the separating plane that classifies the data points best, that is, the plane with the maximal distance between the two classes. The keystone of this classification logic is the support vectors, chosen from among the training samples that lie at the boundaries of the two classes.

The goal of SVM modeling is to find the optimal hyperplane that separates clusters of vectors (sets of features) such that data points of one class of the variable fall on one side of the plane and those of the other class on the other side. The vectors nearest the hyperplane are the support vectors. For a 2-dimensional example, assume that the data belong to a categorical variable with two classes and that there are two prediction variables, x1 and x2, with continuous values. The data points may be plotted with x1 on the x-axis and x2 on the y-axis, as shown in Figure 1.

Figure 1: Margin between support vectors in two dimensions.
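The 2-dimensional situation of Figure 1 can be reproduced with scikit-learn's SVC, which wraps LIBSVM, the library used in this study. The two synthetic point clouds and all parameter choices below are ours, for illustration only.

```python
# A minimal 2-D illustration of the margin in Figure 1, on synthetic data:
# one cloud of points in the lower left, one in the upper right.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X1 = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(20, 2))  # class -1
X2 = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(20, 2))  # class +1
X = np.vstack([X1, X2])
y = np.array([-1] * 20 + [1] * 20)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# The support vectors are the training points nearest the separating line;
# for a linear SVM the margin width is 2 / ||w||.
w = clf.coef_[0]
print("number of support vectors:", clf.support_vectors_.shape[0])
print("margin width:", 2.0 / np.linalg.norm(w))
```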

Figure 2 shows the architecture that applies to all machine learning classifiers. During classification, part of the available data is reserved for training while the rest is kept for testing, because estimating classifier accuracy on the training data itself produces overly optimistic results. The ratio of training to test data directly affects the accuracy rate of the classification (as well as the error rate). Another factor affecting the accuracy rate is the data distribution, although mathematically SVM is based on the structure of the formula regardless of distribution. In the SVM technique, the classifier model is created by training the SVM on the training data. The trained model then calculates output values for the test data, and SVM classification performance is evaluated according to the difference between these outputs and the target values.

Figure 2: Architecture of classification via SVM.
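The train/test protocol of Figure 2 can be sketched as follows, assuming scikit-learn and a synthetic stand-in for the farm data (the real 346-record data set is not available here); the split fractions are those examined later in the Results.

```python
# Sketch of the classification architecture: reserve part of the data for
# training, hold out the rest for testing, and measure held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in with 5 input traits, 346 records as in the study
X, y = make_classification(n_samples=346, n_features=5, random_state=1)

for test_size in (0.40, 0.30, 0.25, 0.10):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, random_state=1)
    acc = SVC().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"train {1 - test_size:.0%} / test {test_size:.0%}: "
          f"accuracy {acc:.2f}")
```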

Using a training data set, a classification/regression function is set up in SVM. Based on the structural risk minimization principle, SVM focuses on minimizing a bound on the risk function rather than minimizing the error on the training data. Multiple regression models, by contrast, are most useful when there is a linear relationship between the dependent and independent variables, making SVM the more appropriate approach for this application.

SVM establishes classification according to the following decision function:

f(x) = Σ_i w_i K(s_i, x) + b,  (1)

where s_i is a support vector, w_i is its weight, b is the bias, and K is a kernel function. In the case of a linear kernel, K is the dot product. If f(x) ≥ 0, then x is classified as a member of group 1; otherwise it is classified as a member of group 2.
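The decision function can be verified numerically: scikit-learn's SVC (a LIBSVM wrapper) exposes the fitted support vectors, their weights (dual coefficients), and the bias, so f(x) can be recomputed by hand. The data below are synthetic and illustrative.

```python
# Recompute the SVM decision value f(x) = sum_i w_i K(s_i, x) + b by hand
# and compare it with the library's decision_function. Linear kernel, so
# K(s_i, x) is the dot product.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=5, random_state=0)
clf = SVC(kernel="linear").fit(X, y)

x = X[0]
f_manual = clf.dual_coef_[0] @ (clf.support_vectors_ @ x) + clf.intercept_[0]
print(np.isclose(f_manual, clf.decision_function([x])[0]))  # True
```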

The SVM model for this study was developed using LIBSVM, a freely available SVM software library [39].

For the binary logistic regression model, the prediction value for each member of the test data set was estimated using the logistic regression function generated during the training step.
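That baseline can be sketched as follows, again on synthetic stand-in data: the logistic regression coefficients are estimated on the training set and then applied to the test set to obtain each cow's predicted probability of being a case.

```python
# Sketch of the binary logistic regression baseline: fit on the training
# set, then score each test record with its predicted probability of being
# a case. Data are synthetic, not the study's records.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=346, n_features=5, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=2)

logit = LogisticRegression().fit(X_tr, y_tr)
probs = logit.predict_proba(X_te)[:, 1]   # P(case) for each test record
preds = (probs >= 0.5).astype(int)        # classify at the 0.5 threshold
print("test accuracy:", (preds == y_te).mean())
```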

This approach achieves high classification accuracy and very good generalization capability. SVM performs effective classification by mapping input vectors into a higher-dimensional space and constructing a hyperplane that optimally separates the data in that space. In training, SVM combines a simple linear classifier with a prior fixed nonlinear mapping in order to make the data separable. This formulation protects training from the problem of local minima and optimizes the function with respect to its generalization ability.

The plot shows which class each (x1, x2) point belongs to. In this example, data points of one class lie in the lower left corner of the plot and data points of the other class in the upper right corner. The SVM analysis finds a one-dimensional hyperplane, that is, a line, to separate the data points based on their target classes. Obviously, an infinite number of such lines are possible; the task of SVM is to determine which line is optimal. For this optimization, two parallel lines are constructed, one on each side of the separating line, and the two lines are pushed up against the two sets of data points belonging to the two classes. The distance between these two lines is the margin. In Figure 1, line A is superior to line B because it yields the larger margin. SVM classifiers are therefore also known as maximum margin classifiers.

Mathematically, suppose the training data are pairs (x_i, y_i), where y_i ∈ {+1, −1} indicates the class to which the data point x_i belongs. In a simple case, a hyperplane can be defined as w^T x + b = 0, where w is the adaptable weight vector, b is the bias term, and ^T is the vector transpose operator. In training, the data points are projected into the higher-dimensional space, and classification is conducted through ŷ = sign[w^T x + b], where ŷ is the estimate of y. The SVM algorithm searches for the hyperplane that maximizes the margin between the two sets of data points in classes +1 and −1.

Further, the margin maximum problem can be solved for any high-dimensional space by introducing a kernel function [38, 40]. A nonlinear kernel function allows the low-dimensional input space to be nonlinearly transformed into a high-dimensional feature space such that the probability that the feature space is linearly separable becomes higher. Theoretically, the kernel function can implicitly map the input space into an arbitrary high-dimensional feature space that can be linearly separable even if the input space may not be. Some commonly used kernel functions are the polynomial, the Gaussian, the Sigmoid, and the RBF.

In general, SVM training solves the following optimization problem:

min_{w, b, ξ} (1/2)‖w‖² + C Σ_i ξ_i
subject to y_i(w^T φ(x_i) + b) ≥ 1 − ξ_i, ξ_i ≥ 0,  (2)

where C is a user-defined positive constant (a penalty parameter of the error term), ξ_i is the slack variable measuring the degree of misclassification of x_i, and φ is the function mapping data points from the input space to a higher-dimensional feature space.

In (2), data points are mapped into a higher-dimensional space using the function φ. A linearly separable hyperplane (w^T φ(x) + b = 0) with the maximal margin is then found in the higher-dimensional space. The mapping function φ is determined implicitly by a specified kernel function K:

K(x_i, x_j) = φ(x_i)^T φ(x_j).  (3)

If RBF is chosen as the kernel function, then

K(x_i, x_j) = exp(−γ‖x_i − x_j‖²), γ > 0.  (4)

The RBF kernel is well suited to handling nonlinear classification for SVMs.
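The RBF kernel value can be computed directly from its definition and checked against scikit-learn's implementation; the two vectors and the γ value below are arbitrary examples.

```python
# Compute K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2) by hand and compare
# with scikit-learn's rbf_kernel, which uses the same definition.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

xi = np.array([[1.0, 2.0, 3.0]])
xj = np.array([[2.0, 0.0, 1.0]])
gamma = 0.5

manual = np.exp(-gamma * np.sum((xi - xj) ** 2))
print(np.isclose(manual, rbf_kernel(xi, xj, gamma=gamma)[0, 0]))  # True
```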

This study's model was evaluated for sensitivity, specificity, and error rate. A day of observation was classified as a true positive (TP) if an alert was generated on a day of mastitis; an undetected day of mastitis was classified as a false negative (FN). A day in a healthy period was considered a true negative (TN) if no alert was generated and a false positive (FP) if an alert was given [41].

Sensitivity represents the percentage of correctly detected days of mastitis out of all days of mastitis:

Sensitivity = TP / (TP + FN) × 100.  (5)

Specificity indicates the percentage of correctly identified healthy days out of all days of health:

Specificity = TN / (TN + FP) × 100.  (6)

The error rate is the percentage of days outside mastitis periods among all days on which an alarm was produced:

Error rate = FP / (TP + FP) × 100,  (7)

where TP, FP, TN, and FN represent the number of true positives, false positives, true negatives, and false negatives, respectively.

3. Results and Discussion

Milk yields ranged between and , electrical conductivity between and , average milking duration between and , and somatic cell count between 19659 and 471809.

Predictions of early stage subclinical mastitis in Holstein cows using the support vector machine (SVM) techniques are given below.

The five data sets investigated in the present study were classified using the SVM algorithm. In training, 60%, 70%, 75%, and 90% of the total data were used, respectively, while the remaining percentage was considered test data for prediction. The model was trained as it was entered into the mastitis detection (MD) system, with five sets of input data (LR, MY, EC, AMD, CS) and one set of output data. The prediction values obtained were compared with target values. The percentages of the true values were calculated, and the average of the percentage values was obtained after these operations were carried out for each output. The operations were carried out in such a way that no data remained unused in the training and testing stages (Table 1).

Table 1: Results of SVM application.

The success rates for application of the 60% training-40% test rate for the 5 data sets were 84%, 85%, 88%, 86%, and 83%, as illustrated in Table 1. The success rates of the application of a 90% training-10% test rate were 86%, 77%, 91%, 69%, and 86%, while the success rates of the 75% training-25% test rate application were 84%, 79%, 91%, 81%, and 83%, and the success rates of the 70% training-30% test rate application were 85%, 83%, 91%, 82%, and 84%. At this stage, averages were found for the combination of the four training tests and a result was derived from these averages.
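The averaging step can be checked arithmetically from the success rates quoted above: averaging each data set's rate over the four train/test splits confirms that the third data set performs best, consistent with the 91% figure reported next.

```python
# Average the quoted success rates over the four train/test splits for
# each of the five data sets (values as quoted from Table 1 in the text).
rates = {
    "60/40": [84, 85, 88, 86, 83],
    "90/10": [86, 77, 91, 69, 86],
    "75/25": [84, 79, 91, 81, 83],
    "70/30": [85, 83, 91, 82, 84],
}
# Columns are data sets 1-5; average each column over the four splits
per_set = [sum(col) / 4 for col in zip(*rates.values())]
print(per_set)  # [84.75, 81.0, 90.25, 79.5, 84.0] - data set 3 is best
```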

The third data set, which produced the best training results under SVM, achieved a mastitis detection success rate of 91%.

Of the 346 records collected for the 170 cows in the study, 61 were determined to indicate subclinical mastitis. The scarcity of subclinical mastitis cases makes disease diagnosis more difficult for the system.

The sensitivity, specificity, and error values for the SVM model were determined to be 89%, 92%, and 50%, respectively. Based on the MY and AMD data, the success rate of classifying healthy cows and cows with subclinical mastitis was 81% (Figure 3). This success rate was 84% when the classification was made using a linear kernel function (Figure 4).

Figure 3: Classification of cows into healthy and subclinical mastitis groups using SVM with kernel function, based on MY and EC data.
Figure 4: Classification of cows into healthy and subclinical mastitis groups using SVM, based on MY and EC data.

According to the MY and EC data, the success rate of classifying healthy cows and those with subclinical mastitis was 83% (Figure 5). This success rate was 77% when the classification was done using the linear kernel function (Figure 6). The values for the MY and CS versus the MY and LR classification were 83% and 73% as opposed to 82% and 73% (Figures 7, 8, 9, and 10).

Figure 5: Classification of cows into healthy and subclinical mastitis groups using SVM with kernel function, based on MY and AMD data.
Figure 6: Classification of cows into healthy and subclinical mastitis groups using SVM, based on MY and AMD data.
Figure 7: Classification of cows into healthy and subclinical mastitis groups using SVM with kernel function, based on MY and CS data.
Figure 8: Classification of cows into healthy and subclinical mastitis groups using SVM, based on MY and CS data.
Figure 9: Classification of cows into healthy and subclinical mastitis groups using SVM with kernel function, based on MY and LR data.
Figure 10: Classification of cows into healthy and subclinical mastitis groups using SVM, based on MY and LR data.

The binary logistic regression modeling used the same milk traits and milking variables and used subclinical mastitis detection as the outcome variable. First, the logistic regression analysis was performed for the training data set. Then, the estimated coefficients were applied to the test data set to calculate the probability of each individual being a case. The sensitivity of the model was 75%, its specificity 79%, and its error rate 57%.

4. Conclusion

This study exemplified the potential of support vector machine techniques for classifying common diseases in animals. The SVM approach can be successfully used to that end, and its classification ability was equivalent to methods now commonly used to detect subclinical mastitis, while relying on simple measurements taken by automatic milking systems rather than laboratory tests.

When applied to population health surveys, the SVM technique has the potential to perform better than such traditional statistical methods as logistic regression [42].

The present study’s SVM approach, here applied to a data set from Holstein cows, could easily be applied to other animal populations.

Support vector machine modeling thus shows promise in detecting diseases, including subclinical mastitis, even when it uses the simple variables obtained by automated milking systems.


Acknowledgments

This research was supported as a doctoral thesis by a grant from the Scientific Research Project Office of Selçuk University, Turkey (Project no. 10201056). The authors wish to thank the staff of KARYEM AŞ, Konya, Turkey.


References

1. M. J. Auldist and I. B. Hubble, "Effects of mastitis on raw milk and dairy products," Australian Journal of Dairy Technology, vol. 53, no. 1, pp. 28–36, 1998.
2. S. Atasever and H. Erdem, "Relationships between mastitis and electrical conductivity of raw milk in dairy cows," Anadolu Journal of Agricultural Sciences, vol. 23, no. 2, pp. 131–136, 2008.
3. J. Duval, "Treating mastitis without antibiotics," EAP Publication 69, 1969.
4. O. S. Osteras, V. L. Edge, and S. W. Martin, "Determinants of success or failure in the elimination of major mastitis pathogens in selective dry cow therapy," Journal of Dairy Science, vol. 82, no. 6, pp. 1221–1231, 1999.
5. D. J. Wilson, R. N. González, J. Hertl et al., "Effect of clinical mastitis on the lactation curve: a mixed model estimation using daily milk weights," Journal of Dairy Science, vol. 87, no. 7, pp. 2073–2084, 2004.
6. T. Tekeli, Mastitis: Quality Milk Production and Somatic Cell Count in the Process of the European Union, Güzeliş Publication Company, Konya, Turkey, 2005.
7. K. R. S. Rao, "Mastitis milk-physical and chemical tests for detection," Indian Journal of Dairy & Biosciences, vol. 8, pp. 57–60, 1997.
8. T. Tekeli, A. Semecan, and M. K. Işık, "Subklinik mastitislerin tanısında pratik bir yöntem" [A practical method for the diagnosis of subclinical mastitis], Hayvancılık Araştırma Dergisi, vol. 3, no. 1, p. 62, 1993.
9. R. A. Mrode and G. J. T. Swanson, "Genetic and statistical properties of somatic cell count and its suitability as an indirect means of reducing the incidence of mastitis in dairy cattle," Animal Breeding Abstracts, vol. 64, no. 11, pp. 847–857, 1996.
10. A. Baştan, M. Kaymaz, M. Fındık, and N. Erünal, "İneklerde subklinik mastitislerin elektriksel iletkenlik, somatik hücre sayısı ve california mastitis test ile saptanması" [Detection of subclinical mastitis in cows by electrical conductivity, somatic cell count, and the California mastitis test], Ankara Üniversitesi Veteriner Fakültesi Dergisi, vol. 44, pp. 1–6, 1997.
11. M. E. Kehrli Jr. and D. E. Shuster, "Factors affecting milk somatic cells and their role in health of the bovine mammary gland," Journal of Dairy Science, vol. 77, no. 2, pp. 619–627, 1994.
12. R. F. Raubertas and G. E. Shook, "Relationship between lactation measures of somatic cell concentration and milk yield," Journal of Dairy Science, vol. 65, no. 3, pp. 419–425, 1982.
13. G. M. Jones, R. E. Pearson, G. A. Clabaugh, and C. W. Heald, "Relationships between somatic cell counts and milk production," Journal of Dairy Science, vol. 67, no. 8, pp. 1823–1831, 1984.
14. S. Göncü and K. Özkütük, "Factors effective at somatic cell count (SCC) in the milk of black and white cows kept in intensive dairy farms at Adana province and their relationships with mastitis," Journal of Animal Production, vol. 43, no. 2, pp. 44–53, 2002.
15. İ. Şeker, A. Rişvanlı, S. Kul, M. Bayraktar, and E. Kaygusuzoğlu, "Relationships between CMT scores and udder traits and milk yield in Brown-Swiss cows," Lalahan Hayvancılık Araştırma Enstitüsü Dergisi, vol. 40, no. 1, pp. 29–38, 2000.
16. F. Cedden, A. Kor, and S. Keskin, "Somatic cell counts in goat milk during late lactation period and its relationship with milk yield, age and some udder measurements," Journal of Agricultural Science, vol. 12, no. 2, pp. 63–67, 2002.
17. A. Rişvanlı and C. Kalkan, "The effect of age and breed on somatic cell count and microbiological isolation rates in milk of dairy cows with subclinical mastitis," Journal of the Faculty of Veterinary Medicine, vol. 13, no. 1-2, pp. 84–87, 2002.
18. C. Uzmay, I. Kaya, Y. Akbaş, and A. Kaya, "Effects of udder and teat morphology, parity and lactation stage on subclinical mastitis in holstein cows," Turkish Journal of Veterinary and Animal Sciences, vol. 27, no. 3, pp. 695–701, 2003.
19. S. Bademkıran, S. Yeşilmen, and K. Gürbulak, "The effect of daily milking frequency on clinical mastitis and milk yield of dairy cows," Journal of the Faculty of Veterinary Medicine, vol. 16, no. 2, pp. 17–21, 2005.
20. E. Eyduran, T. Özdemir, K. Yazgan, and S. Keskin, "The effects of lactation rank and period on somatic cell count (SCC) in milks of Holstein cows," Journal of the Faculty of Veterinary Medicine, vol. 16, no. 1, pp. 61–65, 2005.
21. E. Kul, H. Erdem, and S. Atasever, "Effect of different udder traits on mastitis and somatic cell count in dairy cows," Anadolu Journal of Agricultural Sciences, vol. 21, no. 3, pp. 350–356, 2006.
22. M. A. de Felicio Porcionato, W. V. B. Soares, C. B. M. dos Reis, C. S. Cortinhas, L. Mestieri, and M. V. dos Santos, "Milk flow, teat morphology and subclinical mastitis prevalence in Gir cows," Pesquisa Agropecuaria Brasileira, vol. 45, no. 12, pp. 1507–1512, 2010.
23. M. Nielen, H. Deluyker, Y. H. Schukken, and A. Brand, "Electrical conductivity of milk: measurement, modifiers, and meta analysis of mastitis detection performance," Journal of Dairy Science, vol. 75, no. 2, pp. 606–614, 1992.
24. M. Nielen, Y. H. Schukken, A. Brand, S. Haring, and R. T. Ferwerda-van Zonneveld, "Comparison of analysis techniques for on-line detection of clinical mastitis," Journal of Dairy Science, vol. 78, no. 5, pp. 1050–1061, 1995.
25. L. I. Ilie, L. Tudor, and A. M. Galis, "The electrical conductivity of cattle milk and the possibility of mastitis diagnosis in Romania," Medicina Veterinara, vol. 43, no. 2, pp. 220–227, 2010.
26. E. Norberg, H. Hogeveen, I. R. Korsgaard, N. C. Friggens, K. H. M. N. Sloth, and P. Løvendahl, "Electrical conductivity of milk: ability to predict mastitis status," Journal of Dairy Science, vol. 87, no. 4, pp. 1099–1107, 2004.
27. R. M. Bruckmaier, C. E. Ontsouka, and J. W. Blum, "Fractionized milk composition in dairy cows with subclinical mastitis," Veterinarni Medicina, vol. 49, no. 8, pp. 283–290, 2004.
28. M. Janzekovic, M. Brus, B. Mursec, P. Vinis, D. Stajnko, and F. Cus, "Mastitis detection based on electric conductivity of milk," Journal of Achievements in Materials and Manufacturing Engineering, vol. 34, no. 1, pp. 39–46, 2009.
29. V. N. Vapnik, Statistical Learning Theory: Adaptive and Learning Systems for Signal Processing, Communications, and Control, Wiley, New York, NY, USA, 1998.
30. B. Schölkopf, P. Bartlett, A. Smola, and R. Williamson, "Support vector regression with automatic accuracy control," in Proceedings of the International Conference on Artificial Neural Networks (ICANN '98), Perspectives in Neural Computing, pp. 111–116, Springer, Berlin, Germany, 1998.
31. A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
32. R. C. Thurston, K. A. Matthews, J. Hernandez, and F. De La Torre, "Improving the performance of physiologic hot flash measures with support vector machines," Psychophysiology, vol. 46, no. 2, pp. 285–292, 2009.
33. K. L. S. Ng and S. K. Mishra, "De novo SVM classification of precursor microRNAs from genomic pseudo hairpins using global and intrinsic folding measures," Bioinformatics, vol. 23, no. 11, pp. 1321–1330, 2007.
34. S. B. Rice, G. Nenadic, and B. J. Stapley, "Mining protein function from text using term-based support vector machines," BMC Bioinformatics, vol. 6, supplement 1, article S22, 2005.
35. B. E. Boser, I. M. Guyon, and V. N. Vapnik, "Training algorithm for optimal margin classifiers," in Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pp. 144–152, ACM, New York, NY, USA, July 1992.
36. N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, New York, NY, USA, 2000.
37. D. C. Anderson, W. Li, D. G. Payan, and W. S. Noble, "A new algorithm for the evaluation of shotgun peptide sequencing in proteomics: support vector machine classification of peptide MS/MS spectra and SEQUEST scores," Journal of Proteome Research, vol. 2, no. 2, pp. 137–146, 2003.
38. C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
39. C. C. Chang and C. J. Lin, "LIBSVM: a library for support vector machines," 2001, http://www.csie.ntu.edu.tw/~cjlin/libsvm/.
40. S. Iplikci, "Support vector machines-based generalized predictive control," International Journal of Robust and Nonlinear Control, vol. 16, no. 17, pp. 843–862, 2006.
41. D. Cavero, K.-H. Tölle, C. Henze, C. Buxadé, and J. Krieter, "Mastitis detection in dairy cows by application of neural networks," Livestock Science, vol. 114, no. 2-3, pp. 280–286, 2008.
42. W. Yu, T. Liu, R. Valdez, M. Gwinn, and M. J. Khoury, "Application of support vector machine modeling for prediction of common diseases: the case of diabetes and pre-diabetes," BMC Medical Informatics and Decision Making, vol. 10, article 16, 2010.