Abstract

Categorization of images into meaningful classes by efficient extraction of feature vectors from image datasets has been dependent on feature selection techniques. Traditionally, feature vector extraction has been carried out using different methods of image binarization done with selection of global, local, or mean threshold. This paper has proposed a novel technique for feature extraction based on ordered mean values. The proposed technique was combined with feature extraction using the discrete sine transform (DST) for better classification results using multitechnique fusion. The novel methodology was compared to the traditional techniques used for feature extraction for content based image classification. Three benchmark datasets, namely, the Wang dataset, the Oliva and Torralba (OT-Scene) dataset, and the Caltech dataset, were used for evaluation purposes. Performance measures after evaluation have clearly revealed the superiority of the proposed fusion technique with ordered mean values and discrete sine transform over the popular approaches of single view feature extraction methodologies for classification.

1. Introduction

Massive expansion of image data has been observed in recent times due to the use of digital cameras, the Internet, and other image capturing devices. Classifying images has been considered a vital research domain for efficient handling of image data, as discussed by Lu and Weng in [1]. Recognition of images based on their content has been dependent on extraction of visual features from the dataset, as suggested by Liu and Bai in [2], Agrawal et al. in [3], and Kekre and Thepade in [4]. Conventional approaches for feature extraction from images have considered binarization as a means to divide the image into higher and lower intensity values, as adopted in one of their approaches by Kekre and Thepade in [5] and Shaikh et al. in [6], respectively. Multiple applications of binarization on graphic images and document images have been implemented, some of which were proposed by Ntirogiannis et al. [7], Sezgin and Sankur [8], and Yang and Yan [9]. A novel technique for feature extraction using values of ordered means has been proposed in this work. However, an image encompasses diverse features which can hardly be described with a single technique of feature extraction. Image recognition has been stimulated in the past by feature extraction with partial coefficients in the transform domain, as discussed by Kekre et al. [10]. Hence the discrete sine transform and the Kekre transform were applied on the images to extract partial coefficients as feature vectors in the transform domain. The two transform domain techniques were compared, and the discrete sine transform (DST) was chosen over the Kekre transform for fusion with the ordered mean feature extraction process for better classification results. The fused method was evaluated for classification performance and was compared to existing widely used techniques for feature extraction. The results have clearly indicated superior classification performance of the proposed multiview method of feature extraction over the existing techniques.

Selection of threshold has been an important criterion for feature extraction with binarization. Threshold selection has primarily been divided into three different categories, namely, mean threshold selection, as adopted in some of the approaches for feature extraction using binarization suggested by Thepade et al. in [11, 12] and by Kekre et al. in [13, 14], global threshold selection, proposed by Otsu [15], and local threshold selection, proposed by Niblack [16], Sauvola and Pietikäinen [17], and Bernsen [18]. Binarization with mean threshold selection computes a mean threshold value over all the gray values present in an image, based on which the gray values are divided into an upper intensity group and a lower intensity group. Shaikh et al. [6] have considered the global threshold method for binarization proposed by Otsu [15] for calculation of a single threshold when two distinct peaks were identified in an image histogram and have demonstrated efficient pattern recognition for images having artifacts such as shadow and nonuniform illumination. Image binarization with Otsu’s [15] method of threshold selection has also been effective in optimizing the simultaneous classification of documents, photos, and logos, as reported by Lins et al. [19]. The process of threshold selection has largely been affected by a number of parameters, namely, ambient illumination, variance of gray levels within the object and the background, inadequate contrast, and so forth, as discussed by Chang et al. [20] and Gatos et al. [21]. Valizadeh et al. [22] have performed image binarization using Niblack’s [16] technique and have calculated the local thresholds using standard deviation and variance as measures of dispersion for better classification of degraded images. The uneven contrast and brightness of the image have been considered important factors by Sauvola and Pietikäinen [17] and Bernsen [18] during threshold calculation and have been used efficiently to categorize stained images, as discussed by Hamza et al. [23] and Yang and Zhang [24]. Classification performance with the proposed methodology of feature extraction, fused with a transform domain technique of feature extraction applying the discrete sine transform (DST), was compared with feature extraction using binarization by the existing techniques. The efficiency of the proposed method was established by the quantitative evaluation.

3. Existing Binarization Techniques for Feature Extraction

3.1. Technique 1

Traditional Otsu’s method in [6, 15] of global threshold selection has been widely used for image binarization. A single global threshold was computed in this method to binarize the image into higher intensity values and lower intensity values for feature extraction. The method exhaustively searches for the threshold $t$ that minimizes the weighted within-class variance, given by
$$\sigma_w^2(t) = q_1(t)\,\sigma_1^2(t) + q_2(t)\,\sigma_2^2(t).$$
The class probabilities for the gray level pixels were given by
$$q_1(t) = \sum_{i=1}^{t} P(i), \qquad q_2(t) = \sum_{i=t+1}^{I} P(i).$$
The next step corresponds to the calculation of the class means, given by
$$\mu_1(t) = \sum_{i=1}^{t} \frac{i\,P(i)}{q_1(t)}, \qquad \mu_2(t) = \sum_{i=t+1}^{I} \frac{i\,P(i)}{q_2(t)}.$$
Thus, the total variance was given by the summation of the within-class variance and the between-class variance, where the between-class variance was given by
$$\sigma_b^2(t) = q_1(t)\,q_2(t)\,[\mu_1(t) - \mu_2(t)]^2.$$
The effect of using Otsu’s threshold for binarization has been demonstrated in Figure 1.
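For illustration, a minimal NumPy sketch of Otsu's exhaustive threshold search (maximizing the between-class variance above) is given below; the function name and interface are assumptions for the example, not the authors' code.

import numpy as np

def otsu_threshold(gray):
    # Exhaustive search for the threshold t maximizing the between-class
    # variance q1*q2*(mu1 - mu2)^2 (equivalent to minimizing the weighted
    # within-class variance). `gray` is a 2-D array of 8-bit intensities.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                              # gray-level probabilities P(i)
    best_t, best_sigma_b = 0, -1.0
    for t in range(1, 256):
        q1, q2 = p[:t].sum(), p[t:].sum()              # class probabilities
        if q1 == 0 or q2 == 0:
            continue
        mu1 = (np.arange(t) * p[:t]).sum() / q1        # mean of the lower class
        mu2 = (np.arange(t, 256) * p[t:]).sum() / q2   # mean of the upper class
        sigma_b = q1 * q2 * (mu1 - mu2) ** 2           # between-class variance
        if sigma_b > best_sigma_b:
            best_t, best_sigma_b = t, sigma_b
    return best_t

# binary = gray >= otsu_threshold(gray)   # higher/lower intensity groups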

3.2. Technique 2

Local threshold selection proposed by Niblack in [16] and applied by Valizadeh et al. in [22] has given another binarization technique for feature extraction, as in Figure 2. This popular method selects a threshold for each pixel by sliding a rectangular window of fixed size over the entire image. The local mean along with the local standard deviation has been adopted for threshold calculation, and the threshold has been given by
$$T(x, y) = m(x, y) + k \cdot s(x, y),$$
where $m(x, y)$ and $s(x, y)$ are the mean and standard deviation of the gray values inside the window. Here, the constant $k$ has assumed a value between 0 and 1.
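A short sketch of Niblack-style local thresholding follows, assuming SciPy's uniform_filter for the windowed statistics; the window size of 15 and k = 0.5 are illustrative values only, not taken from the paper.

import numpy as np
from scipy.ndimage import uniform_filter

def niblack_threshold(gray, window=15, k=0.5):
    # Per-pixel threshold T(x, y) = m(x, y) + k * s(x, y), with the local mean
    # m and standard deviation s computed inside a sliding window.
    gray = gray.astype(np.float64)
    m = uniform_filter(gray, size=window)            # local mean
    m2 = uniform_filter(gray ** 2, size=window)      # local mean of squares
    s = np.sqrt(np.maximum(m2 - m ** 2, 0.0))        # local standard deviation
    return m + k * s

# binary = gray >= niblack_threshold(gray)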

3.3. Technique 3

Sometimes the background surfaces of images are faded with large disparity, or tarnished with uneven illumination. Sauvola’s method of local threshold selection in [17, 23] was proposed especially for binarization of such images. The method is an improved version of Niblack’s method, with the threshold given by
$$T(x, y) = m(x, y)\left[1 + k\left(\frac{s(x, y)}{R} - 1\right)\right],$$
where $m(x, y)$ and $s(x, y)$ are the local mean and standard deviation. The standard value considered for $k$ was 0.5 and for $R$ was 128. The effect of binarization with Sauvola’s threshold has been shown in Figure 3.
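A corresponding sketch of Sauvola's threshold with the quoted standard values k = 0.5 and R = 128; the window size is again an illustrative choice.

import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_threshold(gray, window=15, k=0.5, R=128.0):
    # Per-pixel threshold T(x, y) = m(x, y) * (1 + k * (s(x, y) / R - 1)).
    gray = gray.astype(np.float64)
    m = uniform_filter(gray, size=window)            # local mean
    m2 = uniform_filter(gray ** 2, size=window)      # local mean of squares
    s = np.sqrt(np.maximum(m2 - m ** 2, 0.0))        # local standard deviation
    return m * (1.0 + k * (s / R - 1.0))

# binary = gray >= sauvola_threshold(gray)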

3.4. Technique 4

Bernsen’s method in [18, 24] for local threshold selection for binarization of images was based on contrast. The difference between the maximum and minimum gray values, $Z_{max}$ and $Z_{min}$, within a local window was considered as the contrast. The threshold was calculated within the local window as
$$T(x, y) = \frac{Z_{max} + Z_{min}}{2}.$$
A pixel inside the window was set to 0 when its gray value was lower than the local threshold and set to 1 when it was higher, while windows whose contrast fell below a preset limit were assigned to a single class. The effect of binarization with Bernsen’s threshold technique has been given in Figure 4.
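A minimal sketch of the standard Bernsen formulation described above; the window size and contrast limit are assumed values for the example.

import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def bernsen_binarize(gray, window=15, contrast_limit=15):
    # Local threshold T = (Zmax + Zmin) / 2 inside the window; windows whose
    # contrast Zmax - Zmin falls below the limit are assigned to one class.
    zmax = maximum_filter(gray, size=window).astype(np.float64)
    zmin = minimum_filter(gray, size=window).astype(np.float64)
    threshold = (zmax + zmin) / 2.0
    binary = (gray >= threshold).astype(np.uint8)
    binary[(zmax - zmin) < contrast_limit] = 0   # low-contrast regions -> one class
    return binary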

3.5. Technique 5

Calculation of a single mean threshold has been an effective technique for image binarization. Different techniques of feature extraction have been implemented by binarization of images with a mean threshold, as proposed by Thepade et al. in [11, 12] and by Kekre et al. in [13]. The techniques for feature extraction with mean threshold have divided the images into two different levels of intensity values (Guo and Wu [25]). The mean of the values greater than the mean threshold has been taken to represent the higher intensity values and the mean of the values smaller than the mean threshold has been taken to estimate the lower intensity values. Thus, the technique has helped in efficient extraction of feature vectors with binarization of images. The effect of binarization with mean threshold has been given in Figure 5.
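A short sketch of mean-threshold feature extraction as described in this technique: each color plane is split at its mean gray value, and the means of the upper and lower intensity groups form the features. The names are illustrative, not the authors' implementation.

import numpy as np

def mean_threshold_features(image):
    # For each R, G, B plane: split the gray values at the plane mean and keep
    # the mean of the upper intensity group and the mean of the lower group.
    features = []
    for c in range(3):
        plane = image[:, :, c].astype(np.float64)
        t = plane.mean()                           # mean threshold of the plane
        features.append(plane[plane >= t].mean())  # higher intensity mean
        features.append(plane[plane < t].mean())   # lower intensity mean
    return np.array(features)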

4. Proposed Methodology

The proposed approach has followed the fusion of feature extraction by ordered mean values with feature extraction using partial transform coefficients as described in the following subsections.

4.1. Feature Extraction with Ordered Mean Value

Primarily, the ordered mean values have been derived. At the outset, the technique extracts the red (R), green (G), and blue (B) components of an image and arranges the gray values in each component in descending order. Further, it divides the ordered intensity values into $n$ subdivisions, where $n$ is varied as described in Algorithm 1. The ordered values were stored in a one-dimensional array ODA as shown in Algorithm 1. The means of the intensity values of the subdivisions, arranged in descending order, were considered to form the feature vector of that block. The feature vectors from the blocks thus generated were combined to create the feature vector of the image. A block diagram of the proposed method has been given in Figure 6.

Begin
(1) Let I be an image with color components Red (R), Green (G), and Blue (B), each of size M × N.
(2) Represent the intensity values of each color component with a one-dimensional array ODA.
(3) Arrange the gray values of each color component of the given image in descending order as “ORDERED ODA”.
(4) Divide the intensity values of the respective color components into n subdivisions (SubDiv) for feature extraction.
/*The feature vector of each color component was computed from each SubDiv by using the procedure
DESCENDINGMEAN()*/
DESCENDINGMEAN(ORDERED ODA, n)
{
    Read (ORDERED ODA, n)
    for c = 1 to 3            /* R, G, B components */
        for i = 1 to n        /* subdivisions of the ordered array */
            FV(c, i) = mean of the intensity values in SubDiv(c, i)
        end
    end
}
(5) Reiterate procedure DESCENDINGMEAN() for increasing values of n and evaluate the classification performance for each n.
(6) Stop when the classification result with the current number of descending mean values is lower compared to that of the
     immediately preceding number of descending mean values.
End
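The listing below is a compact Python sketch of the ordered-mean feature extraction of Algorithm 1; the function name and the use of np.array_split to form the subdivisions are assumptions for the example.

import numpy as np

def ordered_mean_features(image, n_subdiv):
    # Sort the gray values of each R, G, B plane in descending order
    # (ORDERED ODA), split the sorted array into n_subdiv subdivisions, and
    # keep the mean of each subdivision as a feature.
    features = []
    for c in range(3):
        oda = np.sort(image[:, :, c].ravel())[::-1]   # descending order
        for subdiv in np.array_split(oda, n_subdiv):
            features.append(subdiv.mean())
    return np.array(features)

# n_subdiv is increased (2, 3, 4, ...) until the classification result stops
# improving, as in steps (5)-(6) of Algorithm 1.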

4.2. Applying Transforms

Application of an image transform compacts the significant energy of the image into the coefficients towards the upper end of the transformed image, while the coefficients towards the lower end carry little energy. This characteristic of any transform was utilized to reduce the size of the feature vector of an image in the frequency domain by selection of partial coefficients. The authors have applied two different transforms to compare the classification results originating from the feature extraction process of the individual transforms. At first, Kekre’s transform was applied to the images (Kekre et al. [26]). The transform matrix for Kekre’s transform can be of any size and need not be a power of 2, as is the case for most of the other transforms. All the values on and above the diagonal of the matrix are 1, and the lower triangular part, excluding the value just below the diagonal, is 0. The generalized $N \times N$ Kekre transform matrix is given by
$$K(x, y) = \begin{cases} 1, & x \leq y, \\ -N + (x - 1), & x = y + 1, \\ 0, & x > y + 1. \end{cases}$$
Further, the discrete sine transform (DST) was separately applied to the images for feature vector generation (Kekre et al. [27]). It is defined by a sine transform matrix and is a linear and invertible function. The $N \times N$ DST matrix was formed by rowwise arrangement of the sequences
$$S(k, n) = \sqrt{\frac{2}{N + 1}}\,\sin\!\left(\frac{\pi (k + 1)(n + 1)}{N + 1}\right), \qquad k, n = 0, 1, \ldots, N - 1.$$
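The two transform matrices can be constructed as in the following sketch, with the 2-D transform then applied separably; the helper names are illustrative, and the separable application is an assumption consistent with the matrix definitions above.

import numpy as np

def dst_matrix(N):
    # Rowwise sine sequences s(k, n) = sqrt(2/(N+1)) * sin(pi*(k+1)*(n+1)/(N+1)).
    k = np.arange(N).reshape(-1, 1)
    n = np.arange(N).reshape(1, -1)
    return np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * (k + 1) * (n + 1) / (N + 1))

def kekre_matrix(N):
    # 1 on and above the diagonal, -N + (x - 1) just below the diagonal
    # (1-based row index x), and 0 elsewhere.
    K = np.triu(np.ones((N, N)))
    for x in range(2, N + 1):
        K[x - 1, x - 2] = -N + (x - 1)
    return K

# Separable 2-D transform of a gray plane F: T = S @ F @ S.T, with S = dst_matrix(F.shape[0])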

4.3. Feature Vector Extraction from Transformed Image Coefficients

The transformed coefficients obtained from the test images were stored as the complete set of feature vectors. At the beginning, the size of the feature vector was therefore the same as the size of the image. Subsequently, partial coefficients were extracted from the full set of feature vectors, retaining the coefficients at the upper portion of the transformed image, which carry most of the energy and are crucial for image identification. Extraction of partial coefficients from the images was done in the manner shown in Figure 7.
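A sketch of partial coefficient selection, under the assumption that the retained coefficients form the top-left (upper) block of the transformed plane and that `fraction` denotes the fraction of coefficients kept relative to the full image; the exact selection pattern of Figure 7 may differ.

import numpy as np

def partial_coefficients(transformed, fraction):
    # Keep the top-left block of transformed coefficients; `fraction` is the
    # fraction of the total coefficients retained, so each dimension keeps
    # roughly sqrt(fraction) of its rows/columns.
    rows, cols = transformed.shape
    r = max(1, int(round(rows * np.sqrt(fraction))))
    c = max(1, int(round(cols * np.sqrt(fraction))))
    return transformed[:r, :c].ravel()

# e.g. partial_coefficients(T, 0.00012) keeps roughly 0.012% of the coefficients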

4.4. Proposed Methodology for Classification

A fusion based framework was proposed for the classification process. The method amalgamates the classification decisions obtained from two different feature extraction methodologies and fuses the results into a single final decision of class labels. Two different distance measures, namely, the Canberra distance and the city block distance, were used to measure the classification performances of the two feature extraction techniques, as given in
$$D_{Canberra}(Q, T) = \sum_{i=1}^{n} \frac{|Q_i - T_i|}{|Q_i| + |T_i|}, \qquad D_{cityblock}(Q, T) = \sum_{i=1}^{n} |Q_i - T_i|, \qquad (7)$$
where $Q$ is the feature vector of the query image and $T$ is the feature vector of the training set image.

A normalization technique, namely, Z-score normalization, was used for the purpose of fusing the classification decisions obtained from each of the feature extraction techniques.

Equation (8) gives the process of calculating the final distance measure for classification by fusion with Z-score normalization, conducted over the mean and standard deviation of the distances of the first 30 nearest neighbours of the image to be classified (Walia and Pal [28]):
$$D_{fused} = \frac{D_1 - \mu_1}{\sigma_1} + \frac{D_2 - \mu_2}{\sigma_2}, \qquad (8)$$
where $D_1$ and $D_2$ are the distances obtained with the two feature extraction techniques and $\mu_j$, $\sigma_j$ are the corresponding mean and standard deviation computed over the first 30 nearest neighbours.
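The distance measures of (7) and a Z-score style fusion along the lines of (8) can be sketched as follows; the fusion helper is a reconstruction under the stated assumption that the mean and standard deviation are taken over the 30 nearest distances of each technique.

import numpy as np

def city_block(q, t):
    # City block (L1) distance between query and training feature vectors.
    return np.abs(q - t).sum()

def canberra(q, t):
    # Canberra distance; terms with a zero denominator are skipped.
    denom = np.abs(q) + np.abs(t)
    mask = denom > 0
    return (np.abs(q - t)[mask] / denom[mask]).sum()

def zscore_fuse(dist_a, dist_b, k=30):
    # Z-score normalize each technique's distances with the mean and standard
    # deviation of its k smallest distances, then sum the normalized distances.
    def norm(d):
        d = np.asarray(d, dtype=np.float64)
        nearest = np.sort(d)[:k]
        return (d - nearest.mean()) / (nearest.std() + 1e-12)
    return norm(dist_a) + norm(dist_b)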

5. Experimental Verification

The proposed technique was tested with the Wang dataset (10 categories with 1000 images) used by Li and Wang [29], the Caltech dataset (20 categories with 2533 images), and the Oliva and Torralba (OT-Scene) dataset (8 categories with 2688 images) used by Walia and Pal [28]. The three datasets are extensively used public datasets. An illustration of the original datasets considered has been shown in Figures 8, 9, and 10. A cross validation scheme has been applied to assess the classification performances of the different feature vector extraction techniques, as given by Sridhar in [30]. The system considered $k$-fold cross validation and the value of $k$ was assigned to be 10. One subset out of the ten subsets was considered as the testing set and the rest of the subsets were considered as the training set. The method was iterated for 10 trials and the final result of classification was inferred by combining the 10 results.
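The experiments themselves were run in MATLAB (Section 8); purely as an illustration of the 10-fold protocol, a scikit-learn sketch is shown below, with a nearest-neighbour classifier standing in for the classifiers of Section 6.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier

def ten_fold_accuracy(X, y):
    # One of the ten subsets is held out as the testing set and the remaining
    # nine are used for training; the ten trial results are combined (here by
    # averaging the accuracy).
    scores = []
    for train_idx, test_idx in KFold(n_splits=10, shuffle=True).split(X):
        clf = KNeighborsClassifier(n_neighbors=1, metric="manhattan")
        clf.fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[test_idx], y[test_idx]))
    return float(np.mean(scores))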

6. Classification Methods

The performance measures were done with two different categories of classifiers as given below.

6.1. K-Nearest Neighbor (KNN) Classifier (Distance Based Classifier)

The principle of the KNN classifier is to find the nearest neighbour in the instance space. It uses the Canberra distance and the city block distance given in (7) to designate the unknown instance with the same class as the identified nearest neighbour, as discussed by Han et al. in [31].

6.2. RIDOR Classifier (Rule Based Classifier)

The RIDOR classifier implements a set of if-then rules like other rule based classifiers. Each database record is covered by a single rule, since the rules are mutually exclusive and mutually exhaustive. Classification has been initiated with an empty rule set, which was grown by one rule at a time, as discussed by Kotsiantis in [32]. The training records covered by this rule were removed and the previous steps were repeated until the stopping criteria were met. The default rule was first generated by the Ripple-Down Rule (RIDOR) learner. The exceptions having the lowest error rate were generated for the default rule, followed by generation of the “best” exception for each exception. A tree-like expansion of exceptions was thus carried out, with its leaves having only the default rule without exceptions.

7. Metrics of Evaluation

Evaluation was carried out primarily by considering the misclassification rate (MR) and F1 score for classification by feature extraction with different numbers of ordered mean values as features, as discussed in the proposed method, to determine the optimal number of ordered mean values required as features for the minimum misclassification rate (MR) and maximum F1 score in different classifier environments. Further, the classification performance with the proposed feature extraction technique was compared to the existing feature extraction techniques based on image binarization in terms of precision, recall, and accuracy. The different metrics of evaluation have standard definitions, discussed by Sridhar [30], as follows.

7.1. Misclassification Rate (MR)

Incorrectly classified instances were measured using the misclassification rate, as in
$$MR = \frac{FP + FN}{TP + TN + FP + FN}. \qquad (9)$$

7.2. F1 Score

Classification performance was measured by combining precision and recall (TP rate) to produce a metric known as the F1 score, which is given as in
$$F1 = \frac{2 \times Precision \times Recall}{Precision + Recall}. \qquad (10)$$

7.3. True Positive (TP) Rate/Recall

This metric signifies the likelihood of a classifier producing a true positive result and is given as in
$$Recall = \frac{TP}{TP + FN}. \qquad (11)$$

7.4. Precision

This is the prospect of an object being classified correctly as per its actual value and is given as in
$$Precision = \frac{TP}{TP + FP}. \qquad (12)$$

7.5. Accuracy

It is considered as the capability of a classifier to categorize instances accurately. It is given in
$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}. \qquad (13)$$
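The five metrics above can be computed from a single binary confusion matrix as in the following sketch; per-category evaluation and averaging would wrap this function.

def classification_metrics(tp, tn, fp, fn):
    # MR, F1 score, recall (TP rate), precision, and accuracy for one binary
    # confusion matrix, following (9)-(13).
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    mr = (fp + fn) / (tp + tn + fp + fn)
    return {"MR": mr, "F1": f1, "recall": recall,
            "precision": precision, "accuracy": accuracy}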

8. Experimental Results

MATLAB 7.11.0 (R2010b) on an Intel Core i5 processor with 4 GB RAM was used to carry out the experiments. Primarily, the percentagewise comparison of classification results with the KNN classifier for misclassification rate (MR) and F1 score has been given in Tables 1 and 2 for different numbers of ordered mean values as feature vectors. The minimum misclassification rate (MR) of 6.4% and the highest F1 score of 70.9% were observed with eight ordered mean values as feature vectors, computed from eight descending order subdivisions of the ordered one-dimensional array.

Classification with the RIDOR classifier was possible up to four descending ordered mean values as feature vectors; the misclassification rate (MR) increased and the F1 score degraded for higher numbers of mean values as feature vectors. Feature extraction by calculating three ordered means in descending order as feature vectors from three descending ordered subdivisions has shown the least misclassification rate (MR) and the best F1 score for the proposed method of feature extraction, as observed in Tables 3 and 4, respectively, for the RIDOR classifier. The minimum misclassification rate observed was 8.3% and the maximum F1 score observed was 61.8% with three ordered mean values. Categorywise, the best classification performance for all sets of feature vectors considered for classification has been shown by the Dinosaur category, and the worst classification performance with the RIDOR classifier was found with the Gothic structure category for all sets of feature vectors.

The proposed technique of feature extraction with ordered mean values was further tested with the Caltech dataset and the OT-Scene dataset along with the Wang dataset for the F1 score value of classification, as given in Table 5. The experiment was carried out using the KNN classifier and the RIDOR classifier. The confusion matrices have been given in Tables 6, 7, 8, 9, 10, and 11.

Further, classification with partial coefficient extracted from the two frequency domain techniques, namely, Kekre transform and discrete sine transform, was compared by precision results for classification done by KNN classifier as in Table 12.

The illustration in Figure 11 has clearly established that the highest F1 score value of 0.683 for classification with partial coefficients extracted by applying the discrete sine transform (DST) has exceeded the maximum F1 score value of 0.541 for classification with partial coefficients extracted by the Kekre transform. The highest F1 score for the Kekre transform was obtained with a feature size of 12.5% of the actual size of the image. On the other hand, DST has given its highest F1 score of 0.68 for a feature size of 0.012% of the actual image size. Hence, the feature size required for classification was significantly smaller for the discrete sine transform (DST) compared to the Kekre transform. The features obtained from partial coefficients of 0.012% of the actual image size by applying DST on the image were further assessed for classification results with two other datasets, namely, the Caltech and OT-Scene datasets, along with the Wang dataset for F1 score and misclassification rate. The evaluation was done with the KNN and RIDOR classifiers as shown in Table 13. The confusion matrices have been given in Tables 14, 15, 16, 17, 18, and 19.

Hence it was observed that feature extraction with partial coefficients by applying the discrete sine transform was efficient and had a much smaller feature size compared to the Kekre transform. It was also observed that, for both techniques in the spatial domain and the frequency domain, respectively, the KNN classifier has performed much better compared to the RIDOR classifier. Therefore, the KNN classifier was chosen for fusion of the two feature extraction techniques, namely, feature extraction with ordered mean values and feature extraction by partial coefficient selection by applying the discrete sine transform, for the final classification results.

The average precision value obtained for individual techniques for feature extraction and the proposed fusion approach has been shown in Table 20.

The proposed fusion technique has the highest precision value compared to the individual techniques, as seen in Table 20. Further, the fusion technique, comprising feature extraction with ordered mean values and feature extraction with partial coefficients of the discrete sine transform applied on images, was compared for precision, recall, accuracy, misclassification rate, and F1 score values of classification with respect to the existing techniques. The comparison has been given in Figure 12 and the confusion matrices have been shown in Tables 21, 22, 23, 24, 25, 26, 27, and 28.

Finally, it was observed from the illustration in Figure 12 that classification with the fusion of the proposed methodology of feature extraction with ordered mean values and fractional coefficient extraction by applying the discrete sine transform has surpassed the classification results of the state-of-the-art techniques of feature extraction and has made a noteworthy contribution to enhancing classification performance.

9. Conclusion

The authors have presented a novel method for feature extraction with ordered mean values for recognition of images based on content. Multiview feature extraction for image classification was performed by fusing the proposed method with an existing feature extraction technique, namely, partial coefficient selection from images transformed by applying the discrete sine transform. The novel approach has outperformed the classification performance of the existing feature extraction techniques. The work can be extended towards feature extraction of images essential for content based image classification in vital areas like weather forecasting, medical science, defence activities, and many more. It can also be used as a precursor for content based image retrieval from huge image databases.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.