Advances in Computer Engineering
Volume 2014 (2014), Article ID 454876, 15 pages
http://dx.doi.org/10.1155/2014/454876
Research Article

Feature Extraction with Ordered Mean Values for Content Based Image Classification

1Pimpri Chinchwad College of Engineering, Akurdi, Sec. 26, Pradhikaran, Nigdi, Pune, Maharashtra 411033, India
2Xavier Institute of Social Service, Dr. Camil Bulcke Path (Purulia Road), P.O. Box 7, Ranchi, Jharkhand 834001, India
3A. K. Choudhury School of Information Technology, University of Calcutta, 92 APC Road, Kolkata, West Bengal 700009, India

Received 24 July 2014; Revised 18 November 2014; Accepted 18 November 2014; Published 17 December 2014

Academic Editor: Lijie Li

Copyright © 2014 Sudeep Thepade et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Categorization of images into meaningful classes by efficient extraction of feature vectors from image datasets has been dependent on feature selection techniques. Traditionally, feature vector extraction has been carried out using different methods of image binarization done with selection of a global, local, or mean threshold. This paper has proposed a novel technique for feature extraction based on ordered mean values. The proposed technique was combined with feature extraction using the discrete sine transform (DST) for better classification results using multitechnique fusion. The novel methodology was compared to the traditional techniques used for feature extraction for content based image classification. Three benchmark datasets, namely, the Wang dataset, the Oliva and Torralba (OT-Scene) dataset, and the Caltech dataset, were used for evaluation purposes. Performance measures after evaluation have clearly revealed the superiority of the proposed fusion technique with ordered mean values and discrete sine transform over the popular approaches of single view feature extraction methodologies for classification.

1. Introduction

Massive expansion of image data has been observed due to the use of digital cameras, the Internet, and other image capturing devices in recent times. Classifying images has been considered a vital research domain for efficient handling of image data, as discussed by Lu and Weng in [1]. Recognition of images based on content has been dependent on extraction of visual features from the dataset, as suggested by Liu and Bai in [2], Agrawal et al. in [3], and Kekre and Thepade in [4]. Conventional approaches for feature extraction from images have considered binarization as a means to differentiate the image into higher and lower intensity values, as adopted by Kekre and Thepade in [5] and Shaikh et al. in [6], respectively. Multiple applications of binarization on graphic images and document images have been implemented, some of which were proposed by Ntirogiannis et al. [7], Sezgin and Sankur [8], and Yang and Yan [9]. A novel technique for feature extraction using values of ordered means has been proposed in this work. However, an image encompasses diverse features which can hardly be described with a single technique of feature extraction. Image recognition has been stimulated in the past by feature extraction with partial coefficients in the transform domain, as discussed by Kekre et al. [10]. Hence the discrete sine transform and the Kekre transform were applied on the images to extract partial coefficients as feature vectors in the transform domain. The two transform domain techniques were compared for superior classification results, and the discrete sine transform (DST) was chosen over the Kekre transform for fusion with the ordered mean feature extraction process for better classification results. It was evaluated for classification performance and was compared to existing widely used techniques for feature extraction.
The results have clearly indicated superior performance of classification with multiview method of feature extraction with proposed technique over the existing techniques.

2. Related Work

Selection of threshold has been an important criterion for feature extraction with binarization. Threshold selection has been primarily categorized into three different categories, namely, mean threshold selection as adopted in some of the approaches for feature extraction using binarization suggested by Thepade et al. in [11, 12] and by Kekre et al. in [13, 14], global threshold selection proposed by Otsu [15], and local threshold selection proposed by Niblack [16], Sauvola and Pietikäinen [17], and Bernsen [18]. Binarization with mean threshold selection has been used to compute a mean threshold value for all the gray values present in an image based on which the gray values were divided into upper intensity groups and lower intensity groups. Shaikh et al. [6] have considered global threshold method for binarization proposed by Otsu [15] for calculation of a single threshold when two distinct peaks were identified in an image histogram and have portrayed efficiency in pattern recognition for images having different artifacts such as shadow and nonuniform illumination. Image binarization with Otsu’s [15] method of threshold selection has also been effective in optimizing the simultaneous classification of documents, photos, and logos as reported by Lins et al. [19]. The process of threshold selection has largely been affected by a number of parameters, namely, ambient illuminations, variance of gray levels within the object and the background, inadequate contrast, and so forth, as discussed by Chang et al. [20] and Gatos et al. [21]. Valizadeh et al. [22] have done image binarization using Niblack’s [16] technique and have calculated the local thresholds to binarize by using standard deviation and variance as measures of dispersion for better classification of degraded images. 
The uneven contrast and brightness of the image has been considered as an important factor by Sauvola and Pietikäinen [17] and Bernsen [18] during threshold calculation and has been efficiently used to categorize stained images as discussed by Hamza et al. [23] and Yang and Zhang [24]. Classification performance with the proposed methodology of feature extraction fused with a transform domain technique of feature extraction applying discrete sine transform (DST) was compared with feature extraction using binarization by existing techniques. The efficiency of the proposed method was established by the quantitative evaluation.

3. Existing Binarization Techniques for Feature Extraction

3.1. Technique 1

Traditional Otsu's method in [6, 15] of global threshold selection has been widely used for image binarization. A single global threshold was computed in this method to binarize the image into higher intensity values and lower intensity values for feature extraction. The method searched exhaustively for the threshold $t$ that minimizes the within-class variance. The weighted within-class variance was given by $\sigma_w^2(t) = q_1(t)\,\sigma_1^2(t) + q_2(t)\,\sigma_2^2(t)$. The class probabilities for the gray level pixels were given by $q_1(t) = \sum_{i=1}^{t} P(i)$ and $q_2(t) = \sum_{i=t+1}^{I} P(i)$. The next step corresponds to the calculation of the class means, given by $\mu_1(t) = \sum_{i=1}^{t} i\,P(i)/q_1(t)$ and $\mu_2(t) = \sum_{i=t+1}^{I} i\,P(i)/q_2(t)$. Thus, the total variance was given by the summation of the within-class variance and the between-class variance, where the between-class variance was $\sigma_b^2(t) = q_1(t)\,q_2(t)\,[\mu_1(t) - \mu_2(t)]^2$. The effect of using Otsu's threshold for binarization has been demonstrated in Figure 1.

Figure 1: Binarization with Otsu’s method.
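The exhaustive search described above can be sketched in Python/NumPy (the paper's experiments used MATLAB; this illustrative sketch on a toy image maximizes the between-class variance, which is equivalent to minimizing the within-class variance):

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustively search for the threshold t that maximizes the
    between-class variance q1*q2*(mu1 - mu2)^2 of the gray-level
    histogram (equivalently, minimizes the within-class variance)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                       # gray-level probabilities P(i)
    i = np.arange(256)
    best_t, best_between = 0, -1.0
    for t in range(1, 256):
        q1, q2 = p[:t].sum(), p[t:].sum()       # class probabilities
        if q1 == 0 or q2 == 0:
            continue
        mu1 = (i[:t] * p[:t]).sum() / q1        # mean of the lower class
        mu2 = (i[t:] * p[t:]).sum() / q2        # mean of the upper class
        between = q1 * q2 * (mu1 - mu2) ** 2    # between-class variance
        if between > best_between:
            best_between, best_t = between, t
    return best_t

img = np.array([[10, 12, 11], [200, 210, 205], [12, 198, 11]], dtype=np.uint8)
t = otsu_threshold(img)
binary = (img >= t).astype(np.uint8)            # two-level image for features
```

The returned threshold falls between the two histogram peaks of the toy image, and `binary` is the two-level image used for feature extraction.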
3.2. Technique 2

Local threshold selection proposed by Niblack in [16] and used by Valizadeh et al. in [22] has given another binarization technique for feature extraction, as in Figure 2. The popular method has selected a threshold for each pixel by sliding a rectangular window over the entire image. The local mean $m$ along with the standard deviation $s$ inside the window has been adopted for threshold calculation. The window size was considered as $w \times w$. The expression for the threshold has been given by $T = m + k \cdot s$. Here, the constant $k$ has assumed a value between 0 and 1.

Figure 2: Binarization with Niblack’s method.
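A minimal sketch of this sliding-window rule, assuming an illustrative window size of 3 and constant k = 0.5 (the text only states that k lies between 0 and 1):

```python
import numpy as np

def niblack_threshold(gray, w=3, k=0.5):
    """Slide a w-by-w window over the image; the local threshold is
    T = m + k*s, where m and s are the mean and standard deviation
    inside the window. Edge pixels use replicated borders."""
    pad = w // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    T = np.zeros(gray.shape, dtype=float)
    for r in range(gray.shape[0]):
        for c in range(gray.shape[1]):
            win = padded[r:r + w, c:c + w]
            T[r, c] = win.mean() + k * win.std()
    return T

img = np.array([[10, 10, 10], [10, 200, 10], [10, 10, 10]], dtype=np.uint8)
T = niblack_threshold(img)
binary = (img > T).astype(np.uint8)   # only the bright center survives
```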
3.3. Technique 3

Sometimes the background surfaces of images were faded, having huge disparity, or tarnished, having uneven illumination. Sauvola's method of local threshold selection in [17, 23] was proposed especially for binarization of these types of images. The method was an upgraded version of Niblack's method, and threshold selection was given by $T = m \cdot \bigl(1 + k\,((s/R) - 1)\bigr)$, where $m$ and $s$ are the local mean and standard deviation and $R$ is the dynamic range of the standard deviation. The standard value considered for $k$ was 0.5 and for $R$ was 128. The effect of binarization with Sauvola's threshold has been shown in Figure 3.

Figure 3: Binarization with Sauvola’s method.
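The same sliding-window machinery with Sauvola's formula and the standard parameter values from the text (k = 0.5, R = 128) can be sketched as:

```python
import numpy as np

def sauvola_threshold(gray, w=3, k=0.5, R=128.0):
    """Sauvola's refinement of Niblack: T = m * (1 + k*((s/R) - 1)),
    with the standard values k = 0.5 and dynamic range R = 128."""
    pad = w // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    T = np.zeros(gray.shape, dtype=float)
    for r in range(gray.shape[0]):
        for c in range(gray.shape[1]):
            win = padded[r:r + w, c:c + w]
            m, s = win.mean(), win.std()
            T[r, c] = m * (1 + k * ((s / R) - 1))
    return T

img = np.array([[10, 10, 10], [10, 200, 10], [10, 10, 10]], dtype=np.uint8)
binary = (img > sauvola_threshold(img)).astype(np.uint8)
```

In low-variance regions the factor $(s/R) - 1$ is negative, so the threshold drops below the local mean, which is what makes the method robust on faded backgrounds.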
3.4. Technique 4

Bernsen's method in [18, 24] for local threshold selection for binarization of images was based on contrast. The variation between the maximum and minimum gray values, $C = Z_{max} - Z_{min}$, was considered to estimate the contrast. Threshold calculation was done within a local window of size $w \times w$, where the threshold was the midrange value $T = (Z_{max} + Z_{min})/2$. A pixel was set to 0 when the local contrast within the window was lower than the contrast threshold, and when the local contrast was higher it was set to 1 or 0 by comparing its gray value against $T$. The effect of binarization with Bernsen's threshold technique has been given in Figure 4.

Figure 4: Binarization with Bernsen’s method.
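A sketch of the contrast rule, with an assumed window size and contrast limit L (the text does not fix these values, so both are illustrative):

```python
import numpy as np

def bernsen_binarize(gray, w=3, L=15):
    """Bernsen's contrast-based rule: inside each w-by-w window the
    contrast is C = Zmax - Zmin. When C falls below the contrast
    limit L the pixel is set to 0; otherwise it is thresholded
    against the local midrange T = (Zmax + Zmin) / 2."""
    pad = w // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    out = np.zeros(gray.shape, dtype=np.uint8)
    for r in range(gray.shape[0]):
        for c in range(gray.shape[1]):
            win = padded[r:r + w, c:c + w]
            zmax, zmin = win.max(), win.min()
            if zmax - zmin >= L:                       # enough local contrast
                out[r, c] = 1 if gray[r, c] > (zmax + zmin) / 2 else 0
    return out

img = np.array([[10, 10, 10], [10, 200, 10], [10, 10, 10]], dtype=np.uint8)
binary = bernsen_binarize(img)
```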
3.5. Technique 5

Calculation of a single mean threshold has been an effective technique for image binarization. Different techniques of feature extraction have been implemented by binarization of images with mean threshold as proposed by Thepade et al. in [11, 12] and by Kekre et al. in [13]. The techniques for feature extraction with mean threshold have divided the images into two different levels of intensity values (Guo and Wu [25]). Mean of values greater than the mean threshold has been taken for higher intensity values and mean of values smaller than mean threshold has been taken to estimate the lower intensity values. Thus, the technique helps in efficient extraction of feature vectors with binarization of images. The effect of binarization with mean threshold has been given in Figure 5.

Figure 5: Binarization with mean threshold method.
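The two-mean feature computation described above can be sketched as (toy values are illustrative):

```python
import numpy as np

def mean_threshold_features(gray):
    """Binarize with the global mean as threshold, then keep the mean
    of the values above the threshold (upper intensity group) and
    the mean of the values at or below it (lower intensity group)
    as the two feature values."""
    t = gray.mean()
    upper = gray[gray > t].astype(float)
    lower = gray[gray <= t].astype(float)
    return float(upper.mean()), float(lower.mean())

img = np.array([[10, 20, 30], [200, 210, 220], [15, 25, 205]], dtype=np.uint8)
hi, lo = mean_threshold_features(img)
```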

4. Proposed Methodology

The proposed approach has followed the fusion of feature extraction by ordered mean values with feature extraction using partial transform coefficients as described in the following subsections.

4.1. Feature Extraction with Ordered Mean Value

Primarily, the ordered mean values have been derived. At the outset, the technique has extracted the red (R), green (G), and blue (B) components of an image and has arranged the gray values in each component in descending order. Further, it has divided the ordered intensity values into $n$ subdivisions (where $n$ is the number of ordered mean values to be used as features). The ordered values were stored in a one-dimensional array ODA, as shown in Algorithm 1. The means of the intensity values of the subdivisions, themselves in descending order, were considered to form the feature vector of that block. The feature vectors thus generated from the blocks were combined to create the feature vector of the image. A block diagram of the proposed method has been given in Figure 6.

Algorithm 1

Figure 6: Block diagram of the proposed technique.
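Under the reading above, one component's ordered mean feature vector can be sketched as follows (n = 8 matches the best KNN configuration reported later; applying this to each of the R, G, and B components and concatenating gives the feature vector of the image):

```python
import numpy as np

def ordered_mean_features(channel, n=8):
    """Sort the gray values of one color component in descending
    order into a one-dimensional array (ODA), split it into n
    subdivisions, and take the mean of each subdivision; the n
    means, already in descending order, form the feature vector."""
    oda = np.sort(channel.ravel())[::-1].astype(float)  # descending ODA
    parts = np.array_split(oda, n)
    return [float(p.mean()) for p in parts]

channel = np.arange(64, dtype=np.uint8).reshape(8, 8)
fv = ordered_mean_features(channel, n=8)
```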
4.2. Applying Transforms

Application of an image transform compacts the high-energy, low-frequency components of the image toward the upper end of the transformed image and pushes the low-energy, high-frequency components toward the lower end. This energy compaction characteristic of any transform was utilized to reduce the size of the feature vectors of an image in the frequency domain by selection of partial coefficients. The authors have applied two different transforms to compare the classification results originating from the feature extraction process of each transform. At first, Kekre's transform was applied to the images (Kekre et al. [26]). The transform matrix for Kekre's transform can be of any size and need not be a power of 2, as is the case for most of the other transforms. All the values on and above the diagonal of the matrix were 1, and the lower diagonal part, excluding the value just below the diagonal, was 0. A generalized $N \times N$ Kekre transform matrix has entries $K(x, y) = 1$ for $x \le y$, $K(x, y) = -N + (x - 1)$ for $x = y + 1$, and $K(x, y) = 0$ otherwise. Further, the discrete sine transform (DST) was separately applied to the images for feature vector generation (Kekre et al. [27]). It was defined by a sine transform matrix and was a linear and invertible function. The DST matrix was formed by row-wise arrangement of the sequences $S(m, n) = \sqrt{2/(N+1)}\,\sin\bigl(\pi m n/(N+1)\bigr)$ for $m, n = 1, 2, \ldots, N$.
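Both transform matrices can be constructed directly; this is an illustrative construction, not the authors' code. The DST matrix built this way is orthogonal, which confirms that the transform is invertible as the text states:

```python
import numpy as np

def kekre_matrix(N):
    """Generalized Kekre transform matrix: ones on and above the
    diagonal, the value -N + (x - 1) just below the diagonal of
    1-indexed row x, and zeros elsewhere. N need not be a power
    of 2."""
    K = np.zeros((N, N))
    for x in range(N):
        K[x, x:] = 1.0
        if x > 0:
            K[x, x - 1] = -N + x   # 0-indexed row x is 1-indexed row x+1
    return K

def dst_matrix(N):
    """DST matrix by row-wise arrangement of the sine sequences
    sqrt(2/(N+1)) * sin(pi*m*n/(N+1)) for m, n = 1..N."""
    m, n = np.meshgrid(np.arange(1, N + 1), np.arange(1, N + 1),
                       indexing="ij")
    return np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * m * n / (N + 1))

S = dst_matrix(4)
identity_check = S @ S.T   # orthogonality: close to the 4x4 identity
```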

4.3. Feature Vector Extraction from Transformed Image Coefficients

The transformed coefficients obtained from the test images were stored as the complete set of feature vectors. At the beginning, the size of the feature vector was the same as the size of the image. Subsequently, partial coefficients were extracted from the full set of feature vectors to retain the high-energy, low-frequency components at the upper portion of the transformed image, which were crucial for image identification. Extraction of partial coefficients from the images was done in the manner shown in Figure 7.

Figure 7: Steps for feature extraction using partial coefficient.
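A sketch of partial coefficient selection, assuming the kept coefficients form the upper-left square block of the transformed image (the exact selection pattern is given in Figure 7, so the block shape here is an assumption):

```python
import numpy as np

def partial_coefficients(transformed, fraction):
    """Keep only the upper-left block of the transformed image, where
    the transform concentrates the high-energy coefficients;
    `fraction` is the kept feature size relative to the full image,
    so the block is roughly sqrt(fraction)*N per side."""
    N = transformed.shape[0]
    side = max(1, int(round(N * np.sqrt(fraction))))
    return transformed[:side, :side].ravel()

coeffs = np.arange(64, dtype=float).reshape(8, 8)  # stand-in for S @ img @ S.T
fv = partial_coefficients(coeffs, 0.125)           # keep ~12.5% of the energy map
```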

4.4. Proposed Methodology for Classification

A fusion based framework was proposed for the classification process. The method has amalgamated the classification decisions obtained from two different feature extraction methodologies and fused the results into a single final decision of class labels. Two different distance measures, namely, Canberra distance and city block distance, were used to measure the classification performances of the two feature extraction techniques. They are given by $D_{canberra}(Q, T) = \sum_{i=1}^{n} |Q_i - T_i| / (|Q_i| + |T_i|)$ and $D_{cityblock}(Q, T) = \sum_{i=1}^{n} |Q_i - T_i|$, where $Q$ is the feature vector of the query image and $T$ is that of the training set image.

A normalization technique, namely, z-score normalization, was used for the purpose of fusion of the classification decisions obtained from each of the feature extraction techniques.

Equation (8) has given the process of calculating the final distance measure for classification by fusion with z-score normalization, which was conducted over the mean and standard deviation of the fused distances of the first 30 nearest neighbours of the image to be classified (Walia and Pal [28]): $D_{fused} = (D_1 - \mu_1)/\sigma_1 + (D_2 - \mu_2)/\sigma_2$, where $D_1$ and $D_2$ are the distances under the two feature extraction techniques and $\mu_j$, $\sigma_j$ are computed over the 30 nearest neighbours.
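A sketch of the fused distance, assuming the score normalization is the standard z-score form and using illustrative variable names (the statistics are taken over the first k = 30 nearest neighbours per the text; k = 4 below only because the toy lists are short):

```python
import numpy as np

def fused_distance(d_ordered_mean, d_dst, k=30):
    """Z-score normalize each candidate's distance under the two
    feature extraction techniques, using the mean and standard
    deviation of the k nearest neighbours, then sum the normalized
    distances for the final fused ranking."""
    def znorm(d):
        d = np.asarray(d, dtype=float)
        nearest = np.sort(d)[:k]              # first k nearest neighbours
        return (d - nearest.mean()) / nearest.std()
    return znorm(d_ordered_mean) + znorm(d_dst)

# Distances of four training images under the two techniques.
d1 = [0.2, 0.8, 0.5, 0.1]
d2 = [10.0, 40.0, 25.0, 5.0]
fused = fused_distance(d1, d2, k=4)
best = int(np.argmin(fused))   # training image with smallest fused distance
```

Normalizing first matters because the two raw distances live on very different scales; without it the DST distance would dominate the sum.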

5. Experimental Verification

The proposed technique was tested with the Wang dataset (10 categories with 1000 images) used by Li and Wang [29], the Caltech dataset (20 categories with 2533 images), and the Oliva and Torralba (OT-Scene) dataset (8 categories with 2688 images) used by Walia and Pal [28]. The three datasets are extensively used public datasets. An illustration of the original datasets considered has been shown in Figures 8, 9, and 10. A cross validation scheme has been applied to assess the classification performances of the different feature vector extraction techniques, as given by Sridhar in [30]. The system considered $k$-fold cross validation, and the value of $k$ was assigned to be 10. One subset out of the ten subsets was considered as the testing set and the rest of the subsets were considered to be the training set. The method was iterated for 10 trials and the final result of classification was inferred by combining the 10 results.

Figure 8: Sample images of different categories from Wang database.
Figure 9: Sample images of different categories from OT-Scene database.
Figure 10: Sample images of different categories from Caltech database.

6. Classification Methods

Performance was measured with two different categories of classifiers, as given below.

6.1. K-Nearest Neighbor (KNN) Classifier (Distance Based Classifier)

The principle of the KNN classifier is to find the nearest neighbour in the instance space. It uses the Canberra distance and the city block distance, as given in (7), to designate the unknown instance with the same class as the identified nearest neighbour, as discussed by Han et al. in [31].
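The nearest-neighbour rule with the two distances can be sketched as follows (the toy feature vectors and class labels are illustrative):

```python
import numpy as np

def canberra(x, y):
    """Canberra distance: sum of |x_i - y_i| / (|x_i| + |y_i|),
    skipping terms where both components are zero."""
    num = np.abs(x - y)
    den = np.abs(x) + np.abs(y)
    mask = den > 0
    return float((num[mask] / den[mask]).sum())

def cityblock(x, y):
    """City block (L1) distance: sum of |x_i - y_i|."""
    return float(np.abs(x - y).sum())

def knn_classify(query, train_fvs, train_labels, dist=canberra):
    """1-nearest-neighbour rule: assign the query image the class of
    the closest training feature vector under the chosen distance."""
    d = [dist(query, fv) for fv in train_fvs]
    return train_labels[int(np.argmin(d))]

train = [np.array([1.0, 2.0]), np.array([9.0, 9.0])]
labels = ["beach", "dinosaur"]
pred = knn_classify(np.array([8.0, 8.5]), train, labels)
```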

6.2. RIDOR Classifier (Rule Based Classifier)

The RIDOR classifier implements a set of if-then rules like other rule based classifiers. Each database record was covered by a single rule, the rules being mutually exclusive and mutually exhaustive. Classification was initiated with an empty rule set which was grown one rule at a time, as discussed by Kotsiantis in [32]. The training records covered by a new rule were removed, and the previous steps were repeated until the stopping criteria were met. The Ripple-Down Rule (RIDOR) learner first generated the default rule. The exceptions having the lowest error rate were generated for the default rule, followed by generation of the "best" exception for each exception. A tree-like expansion of exceptions was thus carried out, with the leaf having only the default rule without exception.

7. Metrics of Evaluation

Evaluation was carried out primarily by considering the misclassification rate (MR) and F1 score for classification by feature extraction with different numbers of ordered mean values as features, as discussed in the proposed method, to determine the optimal number of ordered mean values required as features for minimum misclassification rate (MR) and maximum F1 score in different classifier environments. Further, the classification performance with the proposed feature extraction technique was compared to the existing feature extraction techniques based on image binarization in terms of precision, recall, and accuracy. The different metrics of evaluation have standard definitions, discussed by Sridhar [30], as follows.

7.1. Misclassification Rate (MR)

Incorrectly classified instances were measured using the misclassification rate, given by $MR = (FP + FN)/(TP + FP + TN + FN)$.

7.2. F1 Score

Classification performance was measured by combining precision and recall (TP rate) to produce a metric known as the F1 score, given by $F_1 = 2 \cdot (precision \cdot recall)/(precision + recall)$.

7.3. True Positive (TP) Rate/Recall

This metric signifies the likelihood of a classifier producing a true positive result and is given by $Recall = TP/(TP + FN)$.

7.4. Precision

This is the prospect of an object being classified correctly as per the actual value and is given by $Precision = TP/(TP + FP)$.

7.5. Accuracy

It is considered as the capability of a classifier to categorize instances accurately and is given by $Accuracy = (TP + TN)/(TP + FP + TN + FN)$.
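The five definitions above reduce to a few lines over the confusion counts TP, FP, TN, and FN; the counts below are illustrative, not results from the paper:

```python
def classification_metrics(TP, FP, TN, FN):
    """Standard metrics from the confusion counts: misclassification
    rate, recall (TP rate), precision, F1 score, and accuracy."""
    total = TP + FP + TN + FN
    mr = (FP + FN) / total                # misclassification rate
    recall = TP / (TP + FN)               # true positive rate
    precision = TP / (TP + FP)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (TP + TN) / total
    return mr, recall, precision, f1, accuracy

mr, recall, precision, f1, accuracy = classification_metrics(
    TP=80, FP=20, TN=880, FN=20)
```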

8. Experimental Results

MATLAB 7.11.0 (R2010b) on an Intel Core i5 processor with 4 GB RAM was used to carry out the experiments. Primarily, a percentagewise comparison of classification results with the KNN classifier for misclassification rate (MR) and F1 score has been given in Tables 1 and 2 for different numbers of ordered mean values as feature vectors. The minimum misclassification rate (MR) of 6.4% and the highest F1 score of 70.9% were observed with eight ordered mean values as feature vectors, computed from eight descending order subdivisions of the ordered one-dimensional array.

Table 1: Comparison of misclassification rate (MR) for classification with KNN classifier.
Table 2: Comparison of F1 score for classification with KNN classifier.

Classification with the RIDOR classifier was possible up to four descending ordered mean values as feature vectors. The misclassification rate (MR) was increasing and the F1 score was degrading for higher numbers of mean values as feature vectors. Feature extraction by calculating three ordered means from three descending ordered subdivisions has shown the least misclassification rate (MR) and the best F1 score for the proposed method of feature extraction, as observed in Tables 3 and 4, respectively, for the RIDOR classifier. The minimum misclassification rate observed was 8.3% and the maximum F1 score observed was 61.8% with three ordered mean values. Categorywise, the best classification performance for all sets of feature vectors considered was shown by the Dinosaur category, and the worst classification performance with the RIDOR classifier was found with the gothic structure category for all sets of feature vectors.

Table 3: Comparison of misclassification rate (MR) for classification with RIDOR classifier.
Table 4: Comparison of F1 score for classification with RIDOR classifier.

The proposed technique of feature extraction with ordered mean values was further tested with the Caltech dataset and the OT-Scene dataset, along with the Wang dataset, for F1 score values of classification, as given in Table 5. The experiment was carried out using the KNN classifier and the RIDOR classifier. The confusion matrices have been given in Tables 6, 7, 8, 9, 10, and 11.

Table 5: Evaluation of F1 score with three datasets for feature extraction with ordered mean value.
Table 6: Confusion matrix for Wang dataset for feature extraction with ordered mean (KNN classifier).
Table 7: Confusion matrix for Wang dataset for feature extraction with ordered mean (RIDOR classifier).
Table 8: Confusion matrix for OT-Scene dataset for feature extraction with ordered mean (KNN classifier).
Table 9: Confusion matrix for OT-Scene dataset for feature extraction with ordered mean (RIDOR classifier).
Table 10: Confusion matrix for Caltech dataset for feature extraction with ordered mean (KNN classifier).
Table 11: Confusion matrix for Caltech dataset for feature extraction with ordered mean (RIDOR classifier).

Further, classification with partial coefficients extracted from the two frequency domain techniques, namely, the Kekre transform and the discrete sine transform, was compared by F1 score results for classification done by the KNN classifier, as in Table 12.

Table 12: Comparison of F1 score for Kekre transform and discrete sine transform.

The illustration in Figure 11 has clearly established that the highest F1 score of 0.683 for classification with partial coefficients extracted by applying the discrete sine transform (DST) exceeded the maximum F1 score of 0.541 for classification with partial coefficients extracted by the Kekre transform. The Kekre transform gave its highest F1 score at a feature size of 12.5% of the actual size of the image. On the other hand, DST gave its highest F1 score of 0.68 at a feature size of 0.012% of the actual image size. Hence, the feature size for classification was significantly smaller for the discrete sine transform (DST) compared to the Kekre transform. The features obtained from partial coefficients of 0.012% of the actual image size by applying DST on the image were further assessed for classification results with two other datasets, namely, the Caltech and OT-Scene datasets, along with the Wang dataset, for F1 score and misclassification rate. The evaluation was done with the KNN and RIDOR classifiers, as shown in Table 13. The confusion matrices have been given in Tables 14, 15, 16, 17, 18, and 19.

Table 13: Evaluation of F1 score with three datasets for feature extraction with partial coefficient discrete sine transform.
Table 14: Confusion matrix for Wang dataset for feature extraction with DST (KNN classifier).
Table 15: Confusion matrix for Wang dataset for feature extraction with DST (RIDOR classifier).
Table 16: Confusion matrix for OT-Scene dataset for feature extraction with DST (KNN classifier).
Table 17: Confusion matrix for OT-Scene dataset for feature extraction with DST (RIDOR classifier).
Table 18: Confusion matrix for Caltech dataset for feature extraction with DST (KNN classifier).
Table 19: Confusion matrix for Caltech dataset for feature extraction with DST (RIDOR classifier).
Figure 11: Graphical representation of the comparison between F1 scores for Kekre transform and discrete sine transform.

Hence it was observed that feature extraction with partial coefficients by applying the discrete sine transform was efficient and had a much smaller feature size compared to the Kekre transform. It was also observed that, for both techniques in the spatial domain and the frequency domain, respectively, the KNN classifier performed much better than the RIDOR classifier. Therefore, the KNN classifier was chosen for fusion of the two feature extraction techniques, namely, feature extraction with ordered mean values and feature extraction by partial coefficient selection by applying the discrete sine transform, for the final classification results.

The average precision value obtained for individual techniques for feature extraction and the proposed fusion approach has been shown in Table 20.

Table 20: Average precision for individual techniques and fused technique.

The proposed fusion technique has the highest precision value compared to the individual techniques, as seen in Table 20. Further, the fusion technique, comprising feature extraction with ordered mean values and feature extraction with partial coefficients of the discrete sine transform applied on images, was compared for precision, recall, accuracy, misclassification rate, and F1 score values of classification with respect to the existing techniques. The comparison has been given in Figure 12 and the confusion matrices have been shown in Tables 21, 22, 23, 24, 25, 26, 27, and 28.

Table 21: Confusion matrix for proposed technique.
Table 22: Confusion matrix for feature extraction from image bit planes with mean threshold selection.
Table 23: Confusion matrix for feature extraction from even + odd image using mean threshold.
Table 24: Confusion matrix for feature extraction by binarization with Bernsen’s local threshold technique.
Table 25: Confusion matrix for feature extraction by binarization with Sauvola’s local threshold technique.
Table 26: Confusion matrix for feature extraction by ternary threshold selection.
Table 27: Confusion matrix for feature extraction by binarization with Niblack’s local threshold technique.
Table 28: Confusion matrix for feature extraction by binarization with Otsu’s global threshold technique.
Figure 12: Comparison of the proposed technique with existing techniques of feature extraction for classification with KNN classifier.

Finally, it was observed from the illustration in Figure 12 that classification with fusion of the proposed methodology of feature extraction with ordered mean values and fractional coefficient extraction by applying the discrete sine transform has surpassed the classification results of the state-of-the-art techniques of feature extraction and has made a noteworthy contribution to enhancing classification performance.

9. Conclusion

The authors have presented a novel method for feature extraction with ordered mean values for recognition of images based on content. Multiview feature extraction for image classification was performed by fusing the proposed method with another existing feature extraction technique named partial coefficient selection from transformed images by applying discrete sine transform on the images. The novel approach has outperformed the classification performance of the existing feature extraction techniques. The work can be extended towards feature extraction of images essential for content based image classification in vital areas like weather forecasting, medical science, defence activities, and many more. It can also be used as a precursor for content based image retrieval from huge image databases.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. D. Lu and Q. Weng, “A survey of image classification methods and techniques for improving classification performance,” International Journal of Remote Sensing, vol. 28, no. 5, pp. 823–870, 2007.
  2. Y. Liu and T. Bai, “Automatic images classification based on multi-features combined with MIL,” in Proceedings of the IEEE 4th International Congress on Image and Signal Processing, vol. 1, pp. 118–121, 2011.
  3. S. Agrawal, N. K. Verma, P. Tamrakar, and P. Sircar, “Content based color image classification using SVM,” in Proceedings of the 8th International Conference on Information Technology: New Generations (ITNG '11), pp. 1090–1094, Las Vegas, Nev, USA, April 2011.
  4. H. B. Kekre and S. D. Thepade, “Image retrieval using augmented block truncation coding techniques,” in Proceedings of the International Conference on Advances in Computing, Communication and Control (ICAC3 '09), pp. 384–390, January 2009.
  5. H. B. Kekre and S. D. Thepade, “Image retrieval using augmented block truncation coding techniques,” in Proceedings of the International Conference on Advances in Computing, Communication and Control (ICAC3 '09), pp. 384–390, ACM, January 2009.
  6. S. H. Shaikh, A. K. Maiti, and N. Chaki, “A new image binarization method using iterative partitioning,” Machine Vision and Applications, vol. 24, no. 2, pp. 337–350, 2013.
  7. K. Ntirogiannis, B. Gatos, and I. Pratikakis, “An objective evaluation methodology for document image binarization techniques,” in Proceedings of the 8th IAPR Workshop on Document Analysis Systems, 2008.
  8. M. Sezgin and B. Sankur, “Survey over image thresholding techniques and quantitative performance evaluation,” Journal of Electronic Imaging, vol. 13, no. 1, pp. 146–168, 2004.
  9. Y. Yang and H. Yan, “An adaptive logical method for binarization of degraded document images,” Pattern Recognition, vol. 33, no. 5, pp. 787–807, 2000.
  10. H. B. Kekre, S. D. Thepade, A. Viswanathan, A. Varun, P. Dhwoj, and N. Kamat, “Palm print identification using fractional coefficients of Sine/Walsh/Slant transformed palm print images,” Communications in Computer and Information Science, vol. 145, pp. 214–220, 2011.
  11. S. Thepade, R. Das, and S. Ghosh, “Image classification using advanced block truncation coding with ternary image maps,” in Proceedings of the International Conference on Advances in Computing, Communication and Control, vol. 361 of Communications in Computer and Information Science, pp. 500–509, 2013.
  12. S. Thepade, R. Das, and S. Ghosh, “Performance comparison of feature vector extraction techniques in RGB color space using block truncation coding for content based image classification with discrete classifiers,” in Proceedings of the Annual IEEE India Conference (INDICON '13), pp. 1–6, Mumbai, India, December 2013.
  13. H. B. Kekre, S. Thepade, R. K. Kumar Das, and S. Ghosh, “Multilevel block truncation coding with diverse color spaces for image classification,” in Proceedings of the International Conference on Advances in Technology and Engineering (ICATE '13), pp. 1–7, IEEE, Mumbai, India, January 2013.
  14. H. B. Kekre, S. Thepade, R. Das, and S. Ghosh, “Performance boost of block truncation coding based image classification using bit plane slicing,” International Journal of Computer Applications, vol. 47, no. 15, pp. 45–48, 2012.
  15. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
  16. W. Niblack, An Introduction to Digital Image Processing, Prentice Hall, Englewood Cliffs, NJ, USA, 1998.
  17. J. Sauvola and M. Pietikäinen, “Adaptive document image binarization,” Pattern Recognition, vol. 33, no. 2, pp. 225–236, 2000.
  18. J. Bernsen, “Dynamic thresholding of gray level images,” in Proceedings of the International Conference on Pattern Recognition (ICPR '86), pp. 1251–1255, 1986.
  19. R. D. Lins, S. J. Simske, J. Fan et al., “Image classification to improve printing quality of mixed-type documents,” in Proceedings of the 10th International Conference on Document Analysis and Recognition (ICDAR '09), pp. 1106–1110, July 2009.
  20. Y.-F. Chang, Y.-T. Pai, and S.-J. Ruan, “An efficient thresholding algorithm for degraded document images based on intelligent block detection,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC '08), pp. 667–672, October 2008.
  21. B. Gatos, I. Pratikakis, and S. J. Perantonis, “Efficient binarization of historical and degraded document images,” in Proceedings of the 8th IAPR International Workshop on Document Analysis Systems (DAS '08), pp. 447–454, Nara, Japan, September 2008.
  22. M. Valizadeh, N. Armanfard, M. Komeili, and E. Kabir, “A novel hybrid algorithm for binarization of badly illuminated document images,” in Proceedings of the 14th International CSI Computer Conference (CSICC '09), pp. 121–126, October 2009.
  23. H. Hamza, E. Smigiel, and A. Belaid, “Neural based binarization techniques,” in Proceedings of the 8th International Conference on Document Analysis and Recognition, vol. 1, pp. 317–321, September 2005.
  24. Y. Yang and Z. Zhang, “A novel local threshold binarization method for QR image,” in Proceedings of the International Conference on Automatic Control and Artificial Intelligence (ACAI '12), pp. 224–227, March 2012.
  25. J.-M. Guo and M.-F. Wu, “Improved block truncation coding based on the void-and-cluster dithering approach,” IEEE Transactions on Image Processing, vol. 18, no. 1, pp. 211–213, 2009.
  26. H. B. Kekre, S. Thepade, A. Athawale, A. Shah, P. Verlekar, and S. Shirke, “Energy compaction and image splitting for image retrieval using Kekre transform over row and column feature vectors,” International Journal of Computer Science and Network Security, vol. 10, no. 1, pp. 289–298, 2010.
  27. H. B. Kekre, S. Thepade, and A. Maloo, “Comprehensive performance comparison of Cosine, Walsh, Haar, Kekre, Sine, Slant and Hartley transforms for CBIR with fractional coefficients of transformed image,” International Journal of Image Processing, vol. 5, no. 3, pp. 336–351, 2011.
  28. E. Walia and A. Pal, “Fusion framework for effective color image retrieval,” Journal of Visual Communication and Image Representation, vol. 25, pp. 1335–1348, 2014.
  29. J. Li and J. Z. Wang, “Automatic linguistic indexing of pictures by a statistical modeling approach,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1075–1088, 2003.
  30. S. Sridhar, “Image features representation and description,” in Digital Image Processing, Oxford University Press, New Delhi, India, 2011.
  31. J. Han, M. Kamber, and J. Pei, “Classification: advanced methods,” in Data Mining: Concepts and Techniques, pp. 423–425, Morgan Kaufmann Publishers, Waltham, Mass, USA, 3rd edition, 2011.
  32. S. B. Kotsiantis, “Supervised machine learning: a review of classification techniques,” Informatica, vol. 31, no. 3, pp. 249–268, 2007.