Computational and Mathematical Methods in Medicine


Research Article | Open Access

Volume 2015 | Article ID 586928 | 10 pages

Nominated Texture Based Cervical Cancer Classification

Academic Editor: Yu Xue
Received: 08 Sep 2014
Revised: 18 Dec 2014
Accepted: 19 Dec 2014
Published: 14 Jan 2015


Accurate classification of Pap smear images is a challenging task in medical image processing. It can be improved in two ways: by selecting suitable, well-defined specific features and by selecting the best classifier. This paper presents a nominated texture based cervical cancer (NTCC) classification system, which classifies Pap smear images into one of seven classes. This is achieved by extracting well-defined texture features and selecting the best classifier. Seven sets of texture features (24 features in total) are extracted: relative size of nucleus and cytoplasm, dynamic range and first four moments of the intensities of nucleus and cytoplasm, relative displacement of the nucleus within the cytoplasm, gray level cooccurrence matrix features, local binary pattern histogram, Tamura features, and edge orientation histogram. Several types of support vector machine (SVM) and neural network (NN) classifiers are used for the classification. The performance of the NTCC algorithm is tested and compared to other algorithms on the public image database of Herlev University Hospital, Denmark, with 917 Pap smear images. The output of the SVM is found to be the best for most of the classes and gives good results for the remaining classes.

1. Introduction

Cervical cancer is one of the most common cancers affecting women worldwide and the most common in developing countries [1]. It can be cured if it is detected in its early stages, the stage is correctly identified, and proper treatment is given in time. Even so, both incidence and death rates remain high in the developing and underdeveloped regions of the world. It is reported that annually 132,000 new cases are diagnosed and 74,000 deaths occur in India, which is nearly one-third of the global deaths from this cancer [2]. Screening for cervical cancer is done by the Pap test, which has long been considered the gold standard. Due to the subjective disparity among different cytologists, the screening results show many inconsistencies [3]. The test output contains many false positive and false negative results, which puts the reliability of the screening process in question [4]. Moreover, in the manual cervical screening process hundreds of images are analyzed daily; classification of cells becomes tedious and the possibility of human error grows.

Many automatic and semiautomatic methods have been proposed at various times to detect the various stages of cervical cancer. Many of these methods did not succeed in providing measured variables that could eliminate interpretation errors and interobserver discrepancy [5]. Pap smear images are rich in features such as color, shape, and texture. Accurate extraction of unique visual features from these images would greatly help in developing an automated screening device. Texture features are the best suited for this purpose, since the relevant cellular changes are observed chiefly through them. Because texture parameters are simple mathematical representations of properties such as smoothness, roughness, and graininess, the analysis also becomes easier [6]. From these issues, two important challenges follow. First, the selection of unique texture features suitable for classification. Second, the selection of the most efficient and scalable classifier to further improve accuracy.

Plissiti et al. [7] developed a fully automated method to detect nuclei in Pap smear images. Using morphological analysis, the nuclei centroids are detected, and by applying a distance-dependent rule and classification algorithms to the resulting centroids, undesirable artifacts are removed from the cell.

Considering the nucleus as the most informative region of the cell, Sobrevilla et al. [8] presented an algorithm for automatic nuclei detection in cytology cells. The algorithm combines color information, cytopathologists' knowledge, and fuzzy systems, and shows high performance and computational speed. Harandi et al. [9] developed a system for the detection of cytoplasm and nucleus in ThinPrep images, using geometric active contours as the segmentation tool. In this method, localization of cell objects is done at low resolution and boundary detection of cytoplasm and nucleus at high resolution. Bergmeir et al. [10] developed an algorithm to detect cell nuclei and cytoplasm. It uses a combination of a voting scheme and prior knowledge to locate the cell nuclei, and elastic segmentation to determine the shape of the nucleus. Noise is removed with mean-shift and median filters, and edges are extracted with the Canny edge detection algorithm.

Most of the segmentation methods discussed in this literature focus on nucleus and cytoplasm extraction, which requires high contrast around the nucleus boundary. Heavily stained cervical smears, overlapping cells, and images blurred by overexposure or underexposure of light in the microscope all cause difficulties in segmentation [11].

The automatic classification of Pap smear images focuses on assigning single cells to one of two binary classes (normal and abnormal) or to multiple classes (based on severity). Multiple-class classification of Pap smear images became popular when CIN-based classification of cervical cytology images was proposed. Holmquist et al. [12] developed a binary classification method to distinguish between normal and abnormal cells. A dual-wavelength method was used for the automatic isolation of the nucleus from the cytoplasm, and classification was based on the extraction of density-oriented, shape-oriented, and texture-oriented parameters. Chou and Shapiro [13] proposed a method that uses a hierarchical multiple-classifier scheme. It uses a graph-theoretic clustering algorithm to group the training data, component classifiers as inputs to a super-classifier, and subclass labeling to improve classification accuracy. Marinakis et al. [14] proposed a metaheuristic approach to classify Pap smear cells. Twenty uniquely described features are extracted from each cell image, which is then classified as normal or abnormal; a genetic algorithm is used to find the best-performing feature subset.

Most medical image classification methods use supervised learning algorithms, which attempt to find the connection between the independent and dependent variables. Support vector machines (SVM) and neural networks (NN) are the most promising methods in this category. The SVM is a supervised classification method introduced by Cortes and Vapnik [15]. SVM plays a vital role in cytology image analysis, including yeast cells in suspension in bioreactors [16], cells in culture under microscopy [17], and cells in sections of brain tumors [18]. NN classifiers base their classification decisions on statistical probabilities [19]; an NN uses a training set which contains inputs, outputs, and learning rules.

Chen et al. [20] developed an algorithm for segmenting nucleus and cytoplasm contours. Their system classifies Pap smear cells into one of four classes using SVM; two experiments were conducted to validate the classification performance, which showed the best results. Mat-Isa et al. [21] proposed an automatic cervical cancer diagnostic system based on a hierarchical hybrid multilayered perceptron network. In this method, a region-growing-based algorithm is used for feature extraction and classification is done by the NN.

The above literature clearly shows that no single texture feature is sufficient to improve classification accuracy, and that a scalable and cost-effective algorithm is required to improve the efficiency of the classification system. Our classification method, NTCC, classifies Pap smear cells into one of seven classes, based on the work done by researchers on the public database of Herlev University Hospital, Denmark [22, 23]. The detailed description of the Pap smear cells is given in Table 1. By carefully extracting 24 features, the images are classified into one of seven classes, namely, superficial squamous, intermediate squamous, columnar, mild dysplasia, moderate dysplasia, severe dysplasia, and carcinoma in situ. Many researchers have already contributed classification algorithms based on this public database [14, 24].

Table 1: Distribution of the 917 single-cell Pap smear images in the Herlev database.

Class | Category | Cell type             | Cell count | Subtotal
1     | Normal   | Normal squamous       | 74         | 242
2     | Normal   | Intermediate squamous | 70         |
3     | Normal   | Columnar              | 98         |
4     | Abnormal | Mild dysplasia        | 182        | 675
5     | Abnormal | Moderate dysplasia    | 146        |
6     | Abnormal | Severe dysplasia      | 197        |
7     | Abnormal | Carcinoma in situ     | 150        |

2. Nominated Texture Based Cervical Cancer (NTCC) Classification System

The NTCC classification system consists of a texture feature extractor, an SVM trainer, an SVM classifier, and a database to store the trained models. The architecture of the proposed classification system is depicted in Figure 1. The system involves the following steps: preparation and preprocessing of Pap smear images, segmentation of nucleus and cytoplasm, extraction of texture features, and classification.

2.1. Preparation and Preprocessing of Pap Smear Images

The cytology images are acquired through a powerful microscope by skilled cytotechnicians. All images in the public cervical cancer database of Herlev University Hospital, Denmark [23], were captured at a fixed spatial resolution (μm/pixel). The purpose of the preprocessing step is to suppress the unwanted noise found in the cervical image samples and to enhance them for further processing. In general, the nucleus region of a cervical cytology cell has a larger distribution of darker pixels than the cytoplasm. In this step the input images are first inverted; then image binarization followed by a morphological closing operation with a structuring element of size five is performed. A rough segmentation of the cell nucleus is then obtained through a morphological filling operation.
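The preprocessing chain described above (inversion, binarization, morphological closing with a size-five structuring element, hole filling) can be sketched as follows with NumPy and SciPy. The helper name and the simple global threshold are our assumptions; the paper does not specify its thresholding rule, and a practical system would likely use Otsu's method.

```python
import numpy as np
from scipy import ndimage as ndi

def rough_nucleus_mask(gray, disk_radius=5, threshold=None):
    """Rough nucleus segmentation: invert, binarize, close, fill holes."""
    inverted = gray.max() - gray                  # dark nucleus becomes bright
    if threshold is None:
        threshold = inverted.mean()               # simple global threshold (assumption)
    binary = inverted > threshold
    # disk-shaped structuring element of radius 5
    y, x = np.ogrid[-disk_radius:disk_radius + 1, -disk_radius:disk_radius + 1]
    disk = x**2 + y**2 <= disk_radius**2
    closed = ndi.binary_closing(binary, structure=disk)
    return ndi.binary_fill_holes(closed)          # morphological filling step
```

On a synthetic cell image (a dark square on a bright background), the mask covers the dark region and leaves the background untouched.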

2.2. Feature Extraction and Selection

Feature selection chooses the optimum subset of features from the enormous set of potentially useful features available in a given problem domain. By selecting a precise number of features, storage space and computational time are reduced, which certainly improves performance [25, 26]. Too many dimensions in the feature space may also drastically increase the computational complexity and deteriorate the discriminating power of the feature set due to distortion and noise [27].

In the cervical cancer classification system, seven sets of features are extracted: relative size of nucleus and cytoplasm, dynamic range and first four moments of the intensities of nucleus and cytoplasm, relative displacement of the nucleus within the cytoplasm, gray level cooccurrence matrix features, local binary pattern histogram, Tamura features, and edge orientation histogram, for a total of 24 features.

The size of the nucleus and cytoplasm plays a vital role in classifying the cervical cell type:

\[ \text{Nucleus}_{area} = \frac{N}{N+C}, \qquad \text{Cytoplasm}_{area} = \frac{C}{N+C}, \]

where \(\text{Nucleus}_{area}\) is the proportion of nucleus pixels (\(N\)) and \(\text{Cytoplasm}_{area}\) is the proportion of cytoplasm pixels (\(C\)) in the cell.
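Given binary masks for the two regions, these proportions reduce to pixel counting. A minimal sketch (the function name is ours):

```python
import numpy as np

def relative_sizes(nucleus_mask, cytoplasm_mask):
    """Proportions of nucleus and cytoplasm pixels within the whole cell."""
    n = int(np.count_nonzero(nucleus_mask))
    c = int(np.count_nonzero(cytoplasm_mask))
    total = n + c
    return n / total, c / total
```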

The dynamic range of an image and the first four moments of the intensities of nucleus and cytoplasm provide four dissimilar statistical measures. They are calculated from the individual values of the image pixels and not from the cooccurrence of neighboring pixel values. The dynamic range (dr) of an image is the difference between the intensity values of the brightest and the darkest pixel in the image. For a region with pixel intensities \(x_1,\dots,x_n\), the four moments are the mean (\(\mu\)), variance (\(\sigma^2\)), skewness, and kurtosis, as follows [28]:

\[ \mu = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i-\mu)^2, \]
\[ \text{skewness} = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{x_i-\mu}{\sigma}\right)^{3}, \qquad \text{kurtosis} = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{x_i-\mu}{\sigma}\right)^{4}. \]
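These five statistics can be computed directly from a region's pixel values; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def intensity_stats(pixels):
    """Dynamic range and first four moments of a region's intensities."""
    x = np.asarray(pixels, dtype=float).ravel()
    dr = x.max() - x.min()                # dynamic range
    mu = x.mean()                         # mean
    var = x.var()                         # population variance
    sigma = np.sqrt(var)
    z = (x - mu) / sigma if sigma > 0 else np.zeros_like(x)
    return dr, mu, var, (z**3).mean(), (z**4).mean()   # ..., skewness, kurtosis
```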

The position of the nucleus inside the cytoplasm also helps in classifying the stage. It is captured by the relative displacement of the nucleus within the cytoplasm,

\[ d = \left\lVert \text{Cyto}_{centroid} - \text{Nucl}_{centroid} \right\rVert = \sqrt{(x_C - x_N)^2 + (y_C - y_N)^2}, \]

where \(\text{Cyto}_{centroid} = (x_C, y_C)\) is the centroid of the cytoplasm and \(\text{Nucl}_{centroid} = (x_N, y_N)\) is the centroid of the nucleus.
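From the two binary masks, the displacement is the Euclidean distance between the region centroids; a minimal sketch (the function name is ours):

```python
import numpy as np

def relative_displacement(nucleus_mask, cytoplasm_mask):
    """Euclidean distance between the cytoplasm and nucleus centroids."""
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return np.array([ys.mean(), xs.mean()])
    return float(np.linalg.norm(centroid(cytoplasm_mask) - centroid(nucleus_mask)))
```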

Haralick's gray-level cooccurrence matrices (GLCMs) have been used very successfully in texture classification [29]. Of the 14 features outlined by Haralick, we consider the first 11 suitable for our experiment. With \(p(i,j)\) the normalized GLCM over \(N_g\) gray levels, \(\mu_x, \mu_y, \sigma_x, \sigma_y\) the means and standard deviations of the marginal distributions \(p_x\) and \(p_y\), and \(p_{x+y}(k) = \sum_{i+j=k} p(i,j)\), \(p_{x-y}(k) = \sum_{|i-j|=k} p(i,j)\):

(a) Angular second moment: \(f_1 = \sum_i \sum_j p(i,j)^2\).
(b) Contrast: \(f_2 = \sum_{n=0}^{N_g-1} n^2 \sum_{|i-j|=n} p(i,j)\).
(c) Correlation: \(f_3 = \dfrac{\sum_i \sum_j (ij)\,p(i,j) - \mu_x \mu_y}{\sigma_x \sigma_y}\).
(d) Sum of squares (variance): \(f_4 = \sum_i \sum_j (i-\mu)^2\,p(i,j)\).
(e) Inverse difference moment: \(f_5 = \sum_i \sum_j \dfrac{p(i,j)}{1+(i-j)^2}\).
(f) Sum average: \(f_6 = \sum_{k=2}^{2N_g} k\,p_{x+y}(k)\).
(g) Sum variance: \(f_7 = \sum_{k=2}^{2N_g} (k-f_8)^2\,p_{x+y}(k)\).
(h) Sum entropy: \(f_8 = -\sum_{k=2}^{2N_g} p_{x+y}(k)\log p_{x+y}(k)\).
(i) Entropy: \(f_9 = -\sum_i \sum_j p(i,j)\log p(i,j)\).
(j) Difference variance: \(f_{10} = \operatorname{variance}(p_{x-y})\).
(k) Difference entropy: \(f_{11} = -\sum_{k=0}^{N_g-1} p_{x-y}(k)\log p_{x-y}(k)\).
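The mechanics of a GLCM and a few of the features above can be illustrated in a few lines of NumPy (scikit-image's `graycomatrix` offers an optimized equivalent; function names here are ours, and only three of the eleven features are shown for brevity):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level cooccurrence matrix for one (dx, dy) offset."""
    img = np.asarray(img)
    h, w = img.shape
    P = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def haralick_subset(P):
    """Angular second moment, contrast, and entropy from a normalized GLCM."""
    i, j = np.indices(P.shape)
    asm = float((P**2).sum())
    contrast = float(((i - j)**2 * P).sum())
    nonzero = P[P > 0]
    entropy = float(-(nonzero * np.log2(nonzero)).sum())
    return asm, contrast, entropy
```

For a constant image, every cooccurring pair is identical, so the ASM is maximal (1) and both contrast and entropy vanish.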

The local binary pattern (LBP) [30] transforms an image into an array of integer labels. It is computed by comparing each pixel with its \(P\) neighbors at radius \(R\):

\[ \text{LBP}_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\,2^{p}, \qquad s(z) = \begin{cases} 1, & z \ge 0, \\ 0, & z < 0, \end{cases} \]

where \(g_c\) is the gray value of the center pixel and \(g_p\) are the gray values of its neighbors; the operator is invariant to shifts of the mean gray value of the image.
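For the basic 8-neighbor, radius-1 case, the LBP codes and their histogram can be computed with array slicing (a minimal sketch; function names and the fixed clockwise bit order are our choices):

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 (8-neighbour, radius-1) LBP codes for interior pixels."""
    g = np.asarray(gray, dtype=int)
    c = g[1:-1, 1:-1]                               # center pixels
    # neighbours in a fixed clockwise order, each contributing one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit        # s(g_p - g_c) * 2^bit
    return code

def lbp_histogram(gray, bins=256):
    """Normalized histogram of the LBP codes (the feature used here)."""
    h, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, bins))
    return h / h.sum()
```

On a constant image every neighbor equals the center, so every code is 11111111 (255) and the histogram has all its mass in that bin.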

Tamura's texture features of coarseness, contrast, and directionality [31], which are based purely on human visual perception, are extracted for our experiment:

(a) Coarseness: \(F_{crs} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n} S_{best}(i,j)\), where \(S_{best}(i,j)\) is the size of the best-matching averaging window at pixel \((i,j)\).
(b) Contrast: \(F_{con} = \sigma / \alpha_4^{1/4}\), where \(\alpha_4 = \mu_4/\sigma^4\) is the kurtosis of the gray-level distribution.
(c) Directionality: \(F_{dir}\), computed from the sharpness of the peaks of the histogram \(H_D(\phi)\) of local edge directions, \(F_{dir} = 1 - r\,n_p \sum_{p}\sum_{\phi \in w_p} (\phi - \phi_p)^2 H_D(\phi)\), where \(\phi_p\) is the \(p\)-th peak, \(w_p\) its range, \(n_p\) the number of peaks, and \(r\) a normalizing factor.
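Of the three, the contrast measure is the simplest to state in code: the standard deviation of the gray levels divided by the fourth root of the kurtosis. A minimal sketch (the function name is ours):

```python
import numpy as np

def tamura_contrast(gray):
    """Tamura contrast: sigma / alpha4^(1/4), with alpha4 = mu4 / sigma^4."""
    x = np.asarray(gray, dtype=float).ravel()
    sigma2 = x.var()
    if sigma2 == 0:
        return 0.0                      # flat image has no contrast
    mu4 = ((x - x.mean())**4).mean()    # fourth central moment
    alpha4 = mu4 / sigma2**2            # kurtosis
    return float(np.sqrt(sigma2) / alpha4**0.25)
```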

The edge orientation histogram (EOH) builds a histogram of the gradient directions of the edge pixels:

\[ \theta(x,y) = \tan^{-1}\frac{G_y(x,y)}{G_x(x,y)}, \qquad h(k) = \#\{(x,y) : \theta(x,y) \in \text{bin } k\}, \]

where \(G_x\) and \(G_y\) are the horizontal and vertical image gradients.
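A minimal EOH sketch with NumPy gradients (the function name, bin count, and magnitude threshold are our assumptions; the paper does not state its edge detector or bin layout):

```python
import numpy as np

def edge_orientation_histogram(gray, bins=8, mag_threshold=0.0):
    """Normalized histogram of gradient orientations at edge pixels."""
    g = np.asarray(gray, dtype=float)
    gy, gx = np.gradient(g)               # vertical and horizontal gradients
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)            # orientation in (-pi, pi]
    edge = mag > mag_threshold            # keep only edge pixels
    hist, _ = np.histogram(theta[edge], bins=bins, range=(-np.pi, np.pi))
    total = hist.sum()
    return hist / total if total else hist.astype(float)
```

For a vertical ramp (intensity increasing down the rows), every gradient points straight down, so the histogram puts all its mass in the bin containing \(\pi/2\).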

2.3. Classification Using SVM

The proposed classification system classifies the Pap smear images into one of seven classes, namely, superficial squamous, intermediate squamous, columnar, mild dysplasia, moderate dysplasia, severe dysplasia, and carcinoma in situ. The manual classification of the entire dataset has already been done by experts. Figure 2 shows sample single-cell images, in which the first row represents the three normal classes and the second row the four malignant classes. In the proposed classification system, we use the SVM algorithm, based on the work of Cortes and Vapnik and implemented as LibSVM [15, 32], to classify the Pap smear images.
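The paper's classifier is the LibSVM implementation. As a self-contained illustration of the underlying idea, the sketch below trains a binary linear SVM with Pegasos-style subgradient descent; function names and hyperparameters are ours, not the paper's, and LibSVM additionally handles kernels and the seven-class problem (via one-vs-one voting):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient training of a linear SVM (labels in {-1, +1})."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            if y[i] * (X[i] @ w + b) < 1:         # margin violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                                  # only regularization shrink
                w = (1 - eta * lam) * w
    return w, b

def predict(X, w, b):
    return np.where(X @ w + b >= 0, 1, -1)
```

On well-separated synthetic data this recovers a perfect linear separator.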

3. Experimental Results and Discussion

In the present work, features of Pap smear images such as relative size of nucleus and cytoplasm, dynamic range and first four moments of the intensities of nucleus and cytoplasm, relative displacement of the nucleus within the cytoplasm, gray level cooccurrence matrix features, local binary pattern histogram, Tamura features, and edge orientation histogram are extracted. Using combinations of the extracted features, the performance of various classification algorithms is analyzed and compared.

The step-by-step procedure of the cervical cancer classification method is illustrated as follows. Table 2 demonstrates examples of the preprocessing steps applied to Pap smears with this method: the color images are converted to gray scale, and segmentation of nucleus and cytoplasm is done at this stage. Table 3 describes the feature sets extracted from the cytology images. Tables 4 and 5 provide the performance measures (precision and recall) for all categories with different combinations of feature sets using the SVM classifier. The different classifiers used in this experiment are listed in Table 7. The classification and diagnostic performance (precision) of the SVM classifiers and its comparison with the NN classifiers are summarized in Table 6. The performance metrics and ROC curves for all classifiers are shown in Figure 3. The area under the curve for the various classifiers is given in Table 8. The result of the 10-fold cross-validation (confusion matrix) is shown in Table 9.
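Given a confusion matrix such as the one in Table 9, the per-class precision and recall reported in the tables follow from the column and row sums. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def precision_recall(cm):
    """Per-class precision and recall from a confusion matrix
    (rows = actual class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # correct / all predictions of that class
    recall = tp / cm.sum(axis=1)      # correct / all actual members of that class
    return precision, recall
```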

Table 2 (image panels): original image, grayscale image, and segmented result for each class: normal squamous, intermediate squamous, columnar, mild dysplasia, moderate dysplasia, severe dysplasia, and carcinoma in situ.



Table 3: Feature sets.

Feature set | Features
F1 | Relative size of nucleus and cytoplasm
F2 | Dynamic range and first four moments of intensities of nucleus and cytoplasm
F3 | Relative displacement of nucleus within the cytoplasm
F4 | Gray level cooccurrence matrix features
F5 | Local binary pattern histogram
F6 | Tamura features
F7 | Edge orientation histogram

Table 4: Precision (%) of the SVM with linear kernel.

Features | Normal squamous | Intermediate squamous | Columnar | Mild dysplasia | Moderate dysplasia | Severe dysplasia | Carcinoma in situ
F1, F2, F3 | 94.54 | 89.52 | 88.43 | 77.73 | 84.06 | 47.38 | 85.59
F4, F6 | 96.94 | 91.92 | 87.99 | 80.57 | 84.10 | 29.91 | 83.62
F4, F5, F6 | 96.07 | 91.27 | 86.90 | 79.69 | 83.62 | 36.90 | 84.50
F4, F5, F6, F7 | 96.29 | 91.27 | 85.59 | 78.82 | 83.84 | 45.41 | 84.28
F1, F2, F3, F4, F5, F6, F7 | 97.38 | 93.89 | 86.90 | 87.33 | 83.62 | 58.52 | 84.72

Table 5: Recall (%) of the SVM with linear kernel.

Features | Normal squamous | Intermediate squamous | Columnar | Mild dysplasia | Moderate dysplasia | Severe dysplasia | Carcinoma in situ
F1, F2, F3 | 90.40 | 85.88 | 84.13 | 75.65 | 81.87 | 45.35 | 82.19
F4, F6 | 90.46 | 86.67 | 84.49 | 75.34 | 81.03 | 27.80 | 82.09
F4, F5, F6 | 89.89 | 85.17 | 87.10 | 78.61 | 81.72 | 34.51 | 83.02
F4, F5, F6, F7 | 92.29 | 87.87 | 82.59 | 75.00 | 80.78 | 43.50 | 82.78
F1, F2, F3, F4, F5, F6, F7 | 91.59 | 89.71 | 80.35 | 85.78 | 80.02 | 53.48 | 83.79

Table 6: Precision (%) of each classifier.

Classifier | Normal squamous | Intermediate squamous | Columnar | Mild dysplasia | Moderate dysplasia | Severe dysplasia | Carcinoma in situ



Table 7: Classifiers used in the experiment.

C1 | Linear kernel SVM
C2 | Quadratic kernel SVM
C3 | RBF ( = 10) SVM
C4 | Multilayer perceptron kernel SVM
C5 | Single-layer neural network, 10 nodes
C6 | Single-layer neural network, 30 nodes
C7 | NN, two hidden layers of (10, 10) nodes

Classifier | Normal squamous | Intermediate squamous | Columnar | Mild dysplastic | Moderate dysplastic | Severe dysplastic | Carcinoma in situ


Table 9: Confusion matrix of the 10-fold cross-validation (rows: actual class; columns: predicted class).

Actual \ Predicted | Normal squamous | Intermediate squamous | Columnar | Mild dysplasia | Moderate dysplasia | Severe dysplasia | Carcinoma in situ
Normal squamous | 71 | 2 | 1 | 0 | 0 | 0 | 0
Intermediate squamous | 3 | 65 | 1 | 1 | 0 | 0 | 0
Mild dysplasia | 1 | 7 | 11 | 158 | 4 | 1 | 0
Moderate dysplasia | 0 | 0 | 1 | 7 | 121 | 13 | 4
Severe dysplasia | 0 | 1 | 3 | 12 | 32 | 157 | 35
Carcinoma in situ | 0 | 0 | 0 | 3 | 7 | 13 | 137

In our work, the cell samples are collected from the public database, which is free of sampling errors. In order to extract precise features of the cell nucleus and cytoplasm, the segmentation of the nucleus from the cytoplasm remains a demanding issue. Segmentation is done through morphological filling operations, which we found better than the other methods for nucleus detection. In this work, 24 unique features were selected and grouped into seven sets: relative size of nucleus and cytoplasm, dynamic range and first four moments of the intensities of nucleus and cytoplasm, relative displacement of the nucleus within the cytoplasm, gray level cooccurrence matrix features, local binary pattern histogram, Tamura features, and edge orientation histogram. The results show that, by using the combined feature set, all seven types of Pap smear images can be classified. All the dysplasia and carcinoma in situ classes have a higher nuclear proportion and irregular nuclei, and these findings are compatible with the human findings.

The output of the SVM shows that the best precision for normal squamous (97.38%), intermediate squamous (93.89%), mild dysplasia (87.33%), severe dysplasia (58.52%), and carcinoma in situ (84.72%) is achieved through the combination of all seven feature sets (F1, F2, F3, F4, F5, F6, and F7). With the single feature set F7, a precision of 89.35% is achieved for the columnar type. Likewise, a precision of 84.10% for moderate dysplasia is achieved through the combination of the F4 and F6 feature sets. These observations show that no single feature set produces the best results for all classes. Some feature sets give predominant results for some classes, and on average the best overall classification performance was obtained through the combination of all seven feature sets. The recall values of the SVM classifiers likewise show that the better performance is obtained with the combined feature set.

A very low precision of at most 58.52% is observed for the severe dysplasia class under every feature combination. Most images of this type do not follow the classification rules, and they even show very poor segmentation results. A separate set of classification methods and unique feature selection procedures would be needed to improve this performance.

The classification performances of the various classifiers are shown in Table 6. The first three classifiers are SVM kernel types and the fourth is the SVM with a multilayer perceptron kernel. The other three are neural network based: the first two use a single-layer neural network with 10 and 30 nodes, respectively, and the third uses two hidden layers of (10, 10) nodes. The results show that better performance is achieved with the linear kernel SVM than with any other classifier. In general, the classifiers built using SVM outperform the NN classifiers.

In order to evaluate the 917 image instances, 10-fold cross-validation has been performed, holding out a portion of the data set in each fold to properly evaluate the classifier. The results are shown as a confusion matrix (Table 9) for the entire feature set. Out of the 917 images, 71/74 normal squamous, 65/70 intermediate squamous, 85/98 columnar, 158/182 mild dysplasia, 121/146 moderate dysplasia, 157/197 severe dysplasia, and 137/150 carcinoma in situ are classified correctly. Except for the severe dysplasia and carcinoma in situ categories, the method provides promising outputs.

4. Conclusion

In this paper, an improved method for classifying Pap smear images using selected texture features is proposed. The study reveals that the method not only helps in classification but also helps in selecting the features most suitable for each class. The results show that there is no unique set of features suitable for all classes. From the analysis of this classification method, it is concluded that the linear kernel SVM classifier outperforms all other classifiers.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


References

1. O. Barinova, A. Kuzmishkina, A. Vezhnvets, and V. Vezhnvets, "Learning class specific edges for vanishing point estimation," in Proceedings of the Graphicon, pp. 162–165, 2007.
2. S. Aswathy, M. A. Quereshi, B. Kurian, and K. Leelamoni, "Cervical cancer screening: current knowledge & practice among women in a rural population of Kerala, India," Indian Journal of Medical Research, vol. 136, no. 2, pp. 205–210, 2012.
3. R. M. Demay, "Common problems in Papanicolaou smear interpretation," Archives of Pathology and Laboratory Medicine, vol. 121, no. 3, pp. 229–238, 1997.
4. B. L. Craine, E. R. Craine, J. R. Engel, and N. T. Wemple, "A clinical system for digital imaging colposcopy," in Medical Imaging II, vol. 0914 of Proceedings of SPIE, pp. 505–511, Newport Beach, Calif, USA, June 1998.
5. H. Doornewaard, Y. T. van der Schouw, Y. van der Graaf, A. B. Bos, and J. G. van den Tweel, "Observer variation in cytologic grading for cervical dysplasia of Papanicolaou smears with the PAPNET testing system," Cancer, vol. 87, no. 4, pp. 178–183, 1999.
6. G. Castellano, L. Bonilha, L. M. Li, and F. Cendes, "Texture analysis of medical images," Clinical Radiology, vol. 59, no. 12, pp. 1061–1069, 2004.
7. M. E. Plissiti, C. Nikou, and A. Charchanti, "Automated detection of cell nuclei in Pap smear images using morphological reconstruction and clustering," IEEE Transactions on Information Technology in Biomedicine, vol. 15, no. 2, pp. 233–241, 2011.
8. P. Sobrevilla, E. Montseny, F. Vaschetto, and E. Lerma, "Fuzzy-based analysis of microscopic color cervical pap smear images: nuclei detection," International Journal of Computational Intelligence and Applications, vol. 9, no. 3, pp. 187–206, 2010.
9. N. M. Harandi, S. Sadri, N. A. Moghaddam, and R. Amirfattahi, "An automated method for segmentation of epithelial cervical cells in images of ThinPrep," Journal of Medical Systems, vol. 34, no. 6, pp. 1043–1058, 2010.
10. C. Bergmeir, M. García Silvente, and J. M. Benítez, "Segmentation of cervical cell nuclei in high-resolution microscopic images: a new algorithm and a web-based software framework," Computer Methods and Programs in Biomedicine, vol. 107, no. 3, pp. 497–512, 2012.
11. M.-H. Tsai, Y.-K. Chan, Z.-Z. Lin, S.-F. Yang-Mao, and P.-C. Huang, "Nucleus and cytoplast contour detector of cervical smear image," Pattern Recognition Letters, vol. 29, no. 9, pp. 1441–1453, 2008.
12. J. Holmquist, E. Bengtsson, O. Eriksson, B. Nordin, and B. Stenkvist, "Computer analysis of cervical cells. Automatic feature extraction and classification," Journal of Histochemistry & Cytochemistry, vol. 26, no. 11, pp. 1000–1017, 1978.
13. Y.-Y. Chou and L. G. Shapiro, "A hierarchical multiple classifier learning algorithm," Pattern Analysis & Applications, vol. 6, no. 2, pp. 150–168, 2003.
14. Y. Marinakis, G. Dounias, and J. Jantzen, "Pap smear diagnosis using a hybrid intelligent scheme focusing on genetic algorithm based feature selection and nearest neighbor classification," Computers in Biology and Medicine, vol. 39, no. 1, pp. 69–78, 2009.
15. C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
16. N. Wei, J. You, K. Friehs, E. Flaschel, and T. W. Nattkemper, "An in situ probe for on-line monitoring of cell density and viability on the basis of dark field microscopy in conjunction with image processing and supervised machine learning," Biotechnology and Bioengineering, vol. 97, no. 6, pp. 1489–1500, 2007.
17. M. Wang, X. Zhou, F. Li, J. Huckins, R. W. King, and S. T. C. Wong, "Novel cell segmentation and online SVM for cell cycle phase identification in automated microscopy," Bioinformatics, vol. 24, no. 1, pp. 94–101, 2008.
18. D. Glotsos, P. Spyridonos, P. Petalas et al., "Computer-based malignancy grading of astrocytomas employing a support vector machine classifier, the WHO grading system and the regular hematoxylin-eosin diagnostic staining procedure," Analytical and Quantitative Cytology and Histology, vol. 26, no. 2, pp. 77–83, 2004.
19. K. Akita and H. Kuga, "A computer method of understanding ocular fundus images," Pattern Recognition, vol. 15, no. 6, pp. 431–443, 1982.
20. Y.-F. Chen, P.-C. Huang, K.-C. Lin et al., "Semi-automatic segmentation and classification of pap smear cells," IEEE Journal of Biomedical and Health Informatics, vol. 18, no. 1, pp. 94–108, 2014.
21. N. A. Mat-Isa, M. Y. Mashor, and N. H. Othman, "An automated cervical pre-cancerous diagnostic system," Artificial Intelligence in Medicine, vol. 42, no. 1, pp. 1–11, 2008.
22. E. Martin, Pap-smear classification [M.S. thesis], Oersted DTU, Automation, Technical University of Denmark, Copenhagen, Denmark, 2003.
23. MDE Lab.
24. N. Ampazis, G. Dounias, and J. Jantzen, "Pap-smear classification using efficient second order neural network training algorithms," in Methods and Applications of Artificial Intelligence, vol. 3025 of Lecture Notes in Computer Science, pp. 230–245, Springer, Berlin, Germany, 2004.
25. I. Iguyon and A. Elisseeff, "An introduction to variable and feature selection," Journal of Machine Learning Research, vol. 3, pp. 1157–1182, 2003.
26. G. P. Zhang, "Neural networks for classification: a survey," IEEE Transactions on Systems, Man, and Cybernetics C: Applications and Reviews, vol. 30, no. 4, pp. 451–462, 2000.
27. P. A. Devijver and J. Kittler, Pattern Recognition: A Statistical Approach, Prentice-Hall International, London, UK, 1982.
28. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, USA, 2002.
29. P. P. Ohanian and R. C. Dubes, "Performance evaluation for four classes of textural features," Pattern Recognition, vol. 25, no. 8, pp. 819–833, 1992.
30. T. Ojala, M. Pietikäinen, and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971–987, 2002.
31. H. Tamura, S. Mori, and T. Yamawaki, "Textural features corresponding to visual perception," IEEE Transactions on Systems, Man and Cybernetics, vol. 8, no. 6, pp. 460–473, 1978.
32. C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, pp. 1–27, 2011.

Copyright © 2015 Edwin Jayasingh Mariarputham and Allwin Stephen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
