Mathematical Problems in Engineering
Volume 2014, Article ID 716782, 10 pages
http://dx.doi.org/10.1155/2014/716782
Research Article

Object Classification Using Substance Based Neural Network

Department of Information Technology, Bannari Amman Institute of Technology, Sathyamangalam, Tamil Nadu 638401, India

Received 26 January 2014; Accepted 24 February 2014; Published 1 April 2014

Academic Editor: P. Karthigaikumar

Copyright © 2014 P. Sengottuvelan and R. Arulmurugan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Object recognition has seen tremendous growth in the field of image analysis. The required set of image objects is identified and retrieved on the basis of object recognition. In this paper, we propose a novel classification technique called substance based image classification (SIC) using a wavelet neural network. The foremost task of SIC is to remove the surrounding regions from an image, so as to reduce the misclassified portion and to reflect the shape of an object effectively. First, the SIC system segments the image to extract the object region. Next, in order to attain more accurate information, the wavelet transform is applied to the extracted set of regions to extract the configured set of features. Finally, using the neural network classifier model, misclassification over the given natural images is reduced, and background regions are removed from the natural image using LSEG segmentation. Moreover, to increase the accuracy of object classification, the SIC system removes the regions surrounding the object in the image. Performance evaluation reveals that the proposed SIC system reduces the occurrence of misclassification and reflects the exact shape of an object approximately 10–15% better than the existing approach.

1. Introduction

The recognition of objects in real-world applications requires image features that are not disrupted by nearby clutter or partial occlusion. The features must be at least partially invariant to illumination, 3D projective transforms, and common object variations. At the same time, the features must be sufficiently distinctive to identify particular objects among numerous alternatives. The complexity of the object recognition problem stems from the difficulty of selecting suitable image features. Nevertheless, recent research on dense local feature sets has revealed that effective object recognition can frequently be achieved using local image descriptors computed at a large number of repeatable positions.

The difficulty in object recognition can be defined as a labeling problem supported by representations of specified objects. Usually, for a given image, the objects of interest and their background information are processed by assigning labels to the exact regions across the boundaries of the specified image objects. The problem of recognition in natural images is commonly approached through segmentation. Without at least a tentative recognition of objects, classification cannot be accomplished, and without proper segmentation, object recognition is not possible.

The revival of wavelet neural networks has led to their extensive use in digital image processing. Initially, object recognition problems were often solved with linear and quadratic discriminants. Object recognition is the study of how a technical system observes its surroundings, learns to differentiate objects of interest from their background, and makes sound and reasonable decisions about the categories of the objects. The best object recognizers in most instances are humans; yet it is still not well understood how humans recognize objects.

One of the main disadvantages of applying learning techniques to image classification is that huge amounts of training data are required, and collecting them is a very time-consuming process because of the human time and effort involved. A Bayesian belief network that classifies events and measures the posterior probability of the event class given the input features is presented in [1]. In [2], an active learning approach is presented to solve this issue. Instead of accepting examples passively, the active learning algorithm detects and chooses unlabeled examples for the user to label, so that human effort is reduced to a certain extent.

Wavelet neural networks [3] merge time-frequency localization and self-learning, which makes them particularly appropriate for the classification of complex patterns. An efficient object recognition technique utilizing boundary representation employs a wavelet neural network [4] (WNN) to distinguish the singularities of the object curve representation and to achieve object categorization at the same time in an automatic way. Studies of developmental deficits have revealed individuals who exhibit identification impairments for objects while retaining normal recognition in other visual domains. Many wavelet feature selection mechanisms perform feature selection by evaluating each subband separately. In [5], dependencies among features from various subbands are analyzed and simulated for a given set of images using statistical dependence. On the other hand, no developmental cases of the converse pattern, with object recognition impaired while other recognition abilities remain intact, had previously been reported. The existence of such a selective deficit would indicate that the development of standard recognition mechanisms does not rest on the development of general object detection methods [6].

To enhance the process of object recognition in given image data, this paper builds on the above-mentioned techniques to produce an object classifier integrated with substance based image classification for natural images. First, LSEG segmentation is performed to remove the surrounding regions of the image; then the wavelet transform is utilized to extract the set of features from the segmented region image. Finally, an object classifier is integrated with the SIC system to obtain the exact shape of an object.

The rest of the paper is organized as follows. Section 2 gives a brief account of related works. Section 3 elaborates the entire process of the SIC system and examines the corresponding assumptions we have made. Section 4 describes the experimental evaluation, and Section 5 presents the results obtained and a comparison with the existing work. Finally, Section 6 concludes the paper.

2. Existing System

In artificial systems, object detection [7] and recognition [8] remain challenges for image analysis research. This set of topics has attracted attention for many decades. Object detection is the process that precedes the object recognition scheme: when an image is processed, objects must be detected before further recognition can take place. To identify the possible objects of interest in the image, image segmentation is conducted within it. The paper [9] introduced a general active-learning framework for object recognition. This framework provides a novel active-learning technique to construct effective recognition systems.

In [10], the author designed a relationship between image segmentation and object recognition in the framework of the Expectation-Maximization (EM) algorithm. Segmentation is cast as the assignment of image observations to object hypotheses and expressed as the E-step, while the M-step amounts to fitting the object representations to the observations. These two processes are performed iteratively, thus concurrently segmenting an image and reconstructing it in terms of objects.

Consequently, object recognition is often based exclusively on the appearance of the object. However, relevant information also exists in the region neighboring the object. In [11], the author explored the roles that appearance and background information play in object recognition. The paper [12] proposed a pulse-coupled neural network (PCNN) with multichannel (MPCNN) linking and feeding fields for color image segmentation. Unlike the conventional PCNN, pulse-based radial basis function units were introduced into the neurons of the PCNN to determine the fast links among neurons with respect to their spectral feature vectors and spatial proximity.

The paper [13] presented new homotopic image pseudoinvariants based on pixelwise analysis for face recognition. The homotopic image pseudoinvariants are designed together with the most similar image as the template, and the proposed approach can be applied to open-set recognition. In [14], effective and simple features for image recognition (named LiRA-features) are examined in the task of handwritten digit recognition. Two neural network classifiers are considered: a modified LiRA and a modular assembly neural network. A method of feature selection is proposed for the initial learning procedure of a neural network classifier.

Even though linear representations are commonly utilized in image analysis, their performance is rarely optimal in specific applications. The paper [15] proposed a stochastic ascent algorithm for identifying representations of images that are optimal for object recognition, seeking the linear representation that maximizes performance with a nearest-neighbor (NN) classifier. Some of the existing work in object recognition has focused on distinguishing disparate objects of a database. For fast recognition of objects from images, the paper [16] proposed a novel technique that unites feature embedding, for rapid retrieval, with a support vector machine (SVM).

It is of great significance to examine the domain adaptation problem in image object recognition, since image data is available from a diversity of domains. To deal with the feature distribution change in the region boundary segmented image, a domain adaptive input-output kernel learning (DA-IOKL) algorithm was presented in [17], which concurrently learns both the input and output kernels with a discriminative vector-valued function by minimizing the structural error. A new learning-based model [18] for superresolving an image obtained at low spatial resolution was also discussed: a superresolved version of the test image is obtained using the low spatial resolution test image and a database comprising both low and high spatial resolution images.

The capability of object recognition systems to identify a large number of objects is governed by the quality of the images, the feature extraction technique, and the variability and complexity of the underlying objects and of the collected data. To represent efficient hypotheses during recognition, indexing and learning are combined and presented in two stages: in the first stage, the object is represented with the indexing technique, and in the second stage, probabilistic models are used for object recognition in the image [19]. However, a dense set of objects is difficult to retrieve from the classified image. To overcome the issues discussed in the existing works, in this work, the SIC system is used with the wavelet transform to enhance the process of object recognition. In summary, our contributions are as follows:
(i) to propose a novel SIC system for natural image classification and for effective object recognition,
(ii) to preprocess the input natural images using spatial smoothing to obtain a clear image with no noise and clutter,
(iii) to remove the regions present in the surroundings of the natural image using spatial smoothing to increase the sensitivity,
(iv) to quantize the numerous regions in the color quantization stage to separate the set of surrounding regions in the image,
(v) to extract the configured set of features from the image using the wavelet transform, which categorizes the image data into low- and high-frequency subbands,
(vi) finally, to classify the natural images in an efficient manner using a neural network classifier.

3. Proposed Substance Based Image Classification System

Here, we introduce a new SIC system that exploits the classification correlations. Our presentation of the substance based image classification system is divided into three phases. First, object region extraction through segmentation of the image is performed. Second, quality feature extraction using the wavelet transform model is applied to the extracted region to obtain the quality features. Finally, the neural network transform is applied to the extracted portion of the natural images based on the configured quality features of the system. The architecture of the proposed SIC system is shown in Figure 1.

Figure 1: Architecture diagram of the proposed SIC system.

From Figure 1, it can be observed that the proposed SIC system can be used to classify natural images for object recognition. During the initial stage, object region extraction is performed through segmentation of the natural image. Then, a wavelet transform-based structural configured set of features is extracted from the object region. With this, an object classifier is constructed by a neural network using the quality feature set. The SIC system removes the surrounding regions to improve the accuracy of object classification.

Usually, the conventional low-level features of images, extracted using the wavelet transform, are used to differentiate the object from the extracted set of outside regions. For the obtained natural image object, the feature vectors are extracted using a sliding window, as the local means offer more significant information. Besides this, a training procedure with feature vectors as input values is utilized to build a classifier for the image object unit. A neural network method is used to categorize a variety of object images, achieves accurate object image classification, and is suitable for real applications. The main processes involved in the SIC system are object region extraction and the wavelet neural network transform. Object region extraction in building the SIC system is discussed in the forthcoming section, followed by the wavelet neural network transform.

3.1. Object Region Extraction

The object region extraction in building the SIC system is divided into two parts, namely, image preprocessing using spatial smoothing and image segmentation using LSEG. The first part is image preprocessing. The natural image given as input is preprocessed using spatial smoothing to obtain a clear image with no noise and clutter. Here, the preprocessing also includes removing the regions present in the surroundings of the natural image, as illustrated in Figures 2(a) and 2(b), respectively.

Figure 2: (a) Input image and (b) preprocessed image using spatial smoothing.

Spatial smoothing minimizes the effect of large frequency variations present in natural images and at the same time increases the sensitivity. With the help of a Gaussian kernel, the smoothed value of each pixel in the natural image is calculated iteratively as the average of the pixel with respect to its neighbouring pixel values:

p_new(x, y) = (1/N) Σ_{(i, j) ∈ neighbourhood(x, y)} p_old(i, j),

where N denotes the number of neighboring pixels in the natural image, p_new represents the new pixel value, and p_old represents the old pixel value under consideration. The second part of object region extraction is image segmentation. Segmentation is performed to identify the region boundaries of the given image. Once the segmentation process is completed, the regions present outside the object are identified and extracted. For segmenting a given natural image, the JSEG method is implemented in this work. The surrounding regions of the given image are removed based on the image segmentation process, performed through the extraction of the region boundary per pixel. The LSEG image segmentation is carried out in two steps, namely, color quantization and spatial segmentation, as shown in Figures 3(a), 3(b), and 3(c), respectively.
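The neighbourhood-averaging smoothing step described above can be sketched as follows. This is a minimal illustration using a plain box average (the Gaussian-weighted kernel the paper mentions differs only in the per-pixel weights), and the function name `smooth` is our own:

```python
def smooth(image, radius=1):
    """Spatially smooth a 2-D image: replace each pixel with the average
    of the pixels in its (2*radius+1) x (2*radius+1) neighbourhood,
    clipping the window at the image borders."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += image[ny][nx]
                        count += 1
            out[y][x] = total / count  # N neighbours actually inside the image
    return out
```

An isolated bright pixel is spread over its neighbourhood, which is exactly the noise-suppression effect the preprocessing stage relies on.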

Figure 3: (a) Preprocessed image, (b) color quantization applied, and (c) spatial segmented image.

In the color quantization stage, the colors in the image are quantized into numerous classes that are further utilized to separate the set of surrounding regions in the image. This quantization is performed in the color space alone, without considering the spatial distributions of the colors. The pixel colors of the given image are then replaced by the labels of their matching color classes, forming a region map of the image. The representation of the region map is specified as follows.

Representation of a region map with three quantized classes:

+ + + + + + + +
+ + + + + + + +
− − − − − − − −
− − − − − − − −
$ $ $ $ $ $ $ $

The symbols (+, −, $) denote the class labels of the quantized data points of the image. To determine the J value, let Z be the set of all N points z = (x, y) in the quantized region map, let m be the mean position of the points in Z, and let Z_i (i = 1, ..., C) be the C quantized classes with class means m_i. The total and within-class scatters are

S_T = Σ_{z ∈ Z} ||z − m||²,    S_W = Σ_{i=1}^{C} Σ_{z ∈ Z_i} ||z − m_i||²,

and the J value is

J = (S_T − S_W) / S_W,

where C represents the total number of quantized classes over all elements in the natural image. A large J value indicates that the classes are spatially well separated, marking a likely region boundary. Once the segmentation using LSEG is completed, the object and background regions are classified.
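The JSEG-style class-separation measure used during spatial segmentation can be sketched as follows, assuming the standard definition J = (S_T − S_W) / S_W over quantized class labels; the function name and input layout are illustrative:

```python
def j_value(points_by_class):
    """JSEG-style J value for a window of a quantized region map.
    points_by_class maps a class label (e.g. '+', '-', '$') to the list
    of (x, y) positions of the pixels quantized to that class."""
    all_pts = [p for pts in points_by_class.values() for p in pts]
    n = len(all_pts)
    # Mean position of all points, and total scatter S_T about it.
    mx = sum(p[0] for p in all_pts) / n
    my = sum(p[1] for p in all_pts) / n
    s_t = sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in all_pts)
    # Within-class scatter S_W about each class mean.
    s_w = 0.0
    for pts in points_by_class.values():
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        s_w += sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in pts)
    return (s_t - s_w) / s_w
```

Spatially separated classes give a large J (a probable boundary); uniformly intermixed classes give a J near zero.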

With the obtained segmented image, the object region and the surroundings are processed for further extraction of features. To determine similarity, the configured feature values of the object are compared with the values of the other extracted regions. The similarity between regions is identified using the Euclidean distance

D(F_o, F_r) = ||F_o − F_r|| = sqrt(Σ_k (f_{o,k} − f_{r,k})²),

where F_o denotes the feature vector of the object region and F_r denotes the feature vector of another region present in the natural image. With the obtained Euclidean distance values, the surrounding regions of the object are predicted.

3.2. Quality Feature Extraction Using Wavelet Transform

The second phase of the SIC system extracts the configured set of features from the image; the wavelet transform categorizes the image data into low- and high-frequency subbands. The wavelet transform method converts the obtained image data to the frequency domain by transforming the color space values with a basis function called a wavelet. Before proceeding with the feature extraction procedure, it is necessary to change the color model representation of the segmented image for ease of access.
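As an illustration of how a wavelet transform splits an image into low- and high-frequency subbands, here is one level of the 2-D Haar transform; the paper does not specify which wavelet family it uses, so Haar is shown as the simplest case:

```python
def haar2d(img):
    """One level of the 2-D Haar wavelet transform on an image with even
    dimensions. Returns the LL (low-frequency approximation) subband and
    the LH/HL/HH (high-frequency detail) subbands."""
    def split_rows(m):
        lo, hi = [], []
        for r in m:
            lo.append([(r[i] + r[i + 1]) / 2 for i in range(0, len(r), 2)])
            hi.append([(r[i] - r[i + 1]) / 2 for i in range(0, len(r), 2)])
        return lo, hi
    def split_cols(m):
        t = [list(c) for c in zip(*m)]          # transpose
        lo, hi = split_rows(t)                   # filter along columns
        return [list(c) for c in zip(*lo)], [list(c) for c in zip(*hi)]
    L, H = split_rows(img)       # horizontal low / high pass
    LL, LH = split_cols(L)       # then vertical low / high pass
    HL, HH = split_cols(H)
    return LL, LH, HL, HH
```

A region of constant intensity lands entirely in LL, while edges show up in the detail subbands, which is what makes the subbands useful as texture features.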

Usually, the color space of the given image is in RGB format. Due to the complexity involved in RGB color image processing, we use the hue, saturation, and intensity (HSI) color model here. The RGB color space is converted to HSI using the standard relations

I = (R + G + B) / 3,
S = 1 − 3 min(R, G, B) / (R + G + B),
H = cos⁻¹{ ½[(R − G) + (R − B)] / [(R − G)² + (R − B)(G − B)]^{1/2} },

with H replaced by 360° − H when B > G. As the value of hue changes, the corresponding colors also change from red, through yellow and green, and finally back to red. As the value of saturation changes, the corresponding colors change from unsaturated to completely saturated. Finally, variation in the intensity level causes the corresponding change in brightness magnitude.
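Assuming the standard HSI conversion formulas, the color model change can be sketched as follows (the function name is ours):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalised RGB components (each in 0..1) to HSI.
    Hue is returned in degrees (0..360), saturation and intensity in 0..1."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i  # same as 1 - 3*min/(R+G+B)
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                                    # hue undefined for greys
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:
        h = 360.0 - h
    return h, s, i
```

Pure red maps to hue 0 with full saturation, and any grey maps to zero saturation, matching the behaviour described in the text.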

Once the images are converted into the HSI color space model, the configured set of features is extracted from the segmented image. Among the set of quality features, the focus is on texture values. The texture of the given segmented image is expressed through statistical features that measure the degree of roughness of the image, structural features that express the regularity of the image, and spectral features that are based purely on the periodicity of the image texture.

From the extracted object region of the image, the configured quality features are extracted as illustrated in Figures 4(a) and 4(b), respectively.

Figure 4: (a) Segmented image and (b) extracted image.

For a given region boundary image, let p(i, j) denote the normalized co-occurrence probability of the pixel values i and j. The configured quality features are then defined as follows. The contrast refers to the spread of brightness in the image and is given as

Contrast = Σ_i Σ_j (i − j)² p(i, j).

The directionality refers to the concentration of the texture orientation of the region present in the natural image, measured from the distribution of local edge directions.

Entropy measures the randomness in the region boundary image, that is, the uncertainty value,

Entropy = −Σ_i Σ_j p(i, j) log₂ p(i, j).

Finally, uniformity measures the degree to which the region boundary takes the same form throughout,

Uniformity = Σ_i Σ_j p(i, j)².

By obtaining these set of configured quality features, the SIC system achieves the more accurate information about the object in the image data.
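Assuming the standard co-occurrence-matrix definitions of contrast, entropy, and uniformity, these quality features can be computed as follows (the helper name `texture_features` is illustrative):

```python
import math

def texture_features(p):
    """Texture measures from a normalised grey-level co-occurrence matrix p,
    where p[i][j] is the joint probability of levels i and j (sums to 1)."""
    n = len(p)
    # Contrast: spread of brightness, weighted by squared level difference.
    contrast = sum((i - j) ** 2 * p[i][j] for i in range(n) for j in range(n))
    # Entropy: randomness / uncertainty of the distribution (bits).
    entropy = -sum(p[i][j] * math.log2(p[i][j])
                   for i in range(n) for j in range(n) if p[i][j] > 0)
    # Uniformity (energy): high when few level pairs dominate.
    uniformity = sum(p[i][j] ** 2 for i in range(n) for j in range(n))
    return contrast, entropy, uniformity
```

A matrix concentrated on the diagonal gives zero contrast (smooth texture), while mass on off-diagonal entries raises the contrast.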

3.3. Image Object Classification Using Neural Network Classifier

Finally, the third phase of the SIC system classifies the natural images in an efficient manner using the neural network object classifier. The classification of objects using the neural network classifier is organized into three layers, namely, an input layer, a hidden layer, and an output layer. The basic principle of neural networks is that they are networks of "neurons" modeled on the neural structure of the brain. The neuron for image classification consists of a set of input values (x_1, ..., x_n) with associated weights (w_1, ..., w_n), followed by a function (f) that sums the weighted inputs and maps them to a final output (y); its representation is shown in Figure 5.
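The single neuron described above, with inputs x_i, weights w_i, a summing function f, and output y, can be sketched as follows; a sigmoid activation is assumed here, since the paper does not specify one:

```python
import math

def neuron(inputs, weights, bias=0.0):
    """One neuron: weighted sum of inputs plus bias, passed through a
    sigmoid activation to produce the output y in (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

With zero weights the neuron outputs 0.5 regardless of the input, and a strongly positive weighted sum drives the output toward 1, which is how the classifier layers turn feature values into class evidence.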

Figure 5: Neural network classifier structure.

As illustrated in Figure 5, in order to classify the image object, a set of input images is provided to the input layer, which comprises the feature values of a region segmented image and supplies the inputs to the subsequent layer of the object classifier. The neural network classifier for image object classification processes the set of input images one at a time. Once the input layer has been processed for classification, the errors from the initial classification are fed back into the process and applied in the second iteration in the hidden layer. This procedure is repeated for numerous iterations. The output layer consists of the classified parts of the region segmented image, and learning proceeds by comparing the produced categorization of the image objects with the known classification, as illustrated in Figure 1. The process involved in the neural network classifier for image objects is illustrated in Figure 6. Finally, with the classified parts of the region segmented image, the specified object is retrieved.

Figure 6: Application of neural network classifier for image object classification. (a) Input layer, (b) hidden layer, and (c) output layer.

Pseudocode 1 below describes the entire process of the SIC system.

Pseudocode 1

By following the steps in Pseudocode 1, the specified object recognition is performed for the given set of input images. The performance of the proposed SIC system is evaluated in terms of segmentation accuracy, execution time, and classification accuracy. Segmentation accuracy, measured as a percentage, quantifies the process of removing the surrounding regions present in the images. Execution time, measured in seconds, is the time taken to perform object recognition based on the extracted features. Finally, classification accuracy, also measured as a percentage, quantifies the accuracy of recognizing the required shape of the object from the region boundary image.
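As a sketch of how the percentage-based metrics above can be computed, assuming accuracy is the fraction of correctly handled items (the paper does not give explicit formulas):

```python
def classification_accuracy(predicted, actual):
    """Classification accuracy as a percentage: the share of items whose
    predicted class label matches the known (actual) class label."""
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return 100.0 * correct / len(actual)
```

Segmentation accuracy can be computed the same way over per-pixel region labels instead of per-object class labels.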

4. Experimental Evaluation

An experimental evaluation is conducted to estimate the performance of the proposed SIC system using the Corel Image Features dataset from the UCI repository. The dataset contains image features extracted from a Corel image collection. Four sets of features are offered, based on the color histogram, color moments, color histogram layout, and cooccurrence texture. There are 68,040 photo images from diverse categories. From this set of images, only the natural images are extracted for the experimental evaluation and tested in the experiments. The dataset is described in Table 1.

Table 1: Dataset description.

The attribute information for the four sets of features is discussed below. From each image, four sets of features were extracted, namely, the color histogram, color histogram layout, color moments, and cooccurrence texture. For the color histogram, the HSV color space is divided into 32 subspaces (32 dimensions: 8 ranges of H times 4 ranges of S). The value of each dimension in the color histogram of an image is the density of that color in the entire image.

Histogram intersection can be utilized to determine the similarity between two images. For the color histogram layout, each image is divided into 4 subimages (one horizontal split and one vertical split), a color histogram is calculated for every subimage, and histogram intersection is again utilized to determine the similarity between two images. The color moments feature consists of 9 dimensions (3 × 3): the mean, standard deviation, and skewness of each of H, S, and V in the HSV color space. The Euclidean distance between the color moments of two images is utilized to signify the dissimilarity between them. For the cooccurrence texture, images are converted to 16 gray-scale levels, and cooccurrence is calculated in 4 directions: one horizontal, one vertical, and two diagonal.
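Histogram intersection as described here can be sketched as follows for normalised histograms; identical histograms score 1 and disjoint histograms score 0:

```python
def histogram_intersection(h1, h2):
    """Similarity of two histograms of equal length: the sum of the
    bin-wise minima. For histograms normalised to sum to 1, the result
    lies in [0, 1], with 1 meaning identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

For the color histogram layout feature, the same function is applied per subimage and the scores are combined.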

5. Results and Discussion

In this section, we present the comparative evaluation results of the proposed SIC system and the existing Boundary Representation and Wavelet Neural Network approach [4]. Table 2 and Figure 7 describe the performance of the proposed SIC system with various parameter values.

Table 2: Tabulation for segmentation accuracy.
Figure 7: Measure of segmentation accuracy.

The segmentation accuracy is determined based on the number of pixels present in the given input image. The value of the proposed SIC system is compared with the boundary representation method.

Figure 7 describes the segmentation accuracy determined based on the number of pixels present in the given input image. Compared to the existing boundary representation scheme, the proposed SIC system provides a higher rate of segmentation accuracy. This is because the proposed SIC system performs the LSEG segmentation approach, which carries out segmentation through color quantization in addition to the region map representation process; hence the segmentation accuracy increases. When the segmentation performance level increases, the surrounding regions of the image are removed consistently. In the existing approach, by contrast, a CWT curvature representation is implemented to identify the boundary of the object first, and only then does recognition take place. The segmentation accuracy is 10-11% higher in the proposed SIC system.

Table 3 shows the execution time based on the number of pixels present in the given input image. The value of the proposed SIC system is compared with the existing object recognition using boundary representation.

Table 3: Measure of execution time.

Figure 8 describes the execution time determined based on the pixels in the given image. Compared to the existing boundary representation scheme, the proposed SIC system consumes less time for object recognition. This is because object recognition is performed using three sets of operations (segmentation, feature extraction, and classification) carried out beforehand; as a result, the time taken to retrieve the required object from the image is reduced. In the existing method, which uses boundary representations, the object recognition time is higher. The execution time is 5–7% lower in the proposed SIC system.

Figure 8: Measure of execution time.

Table 4 shows the parameters for calculating classification accuracy by considering the number of features. The classification accuracy is determined based on the number of features extracted from the given input image. The value of the proposed SIC system is compared with the existing object recognition using boundary representation.

Table 4: Tabulation for classification accuracy.

Figure 9 describes the classification accuracy determined based on the number of features extracted from the given input image. Compared to the existing boundary representation scheme, the proposed SIC system provides a higher rate of classification accuracy. Since the proposed SIC system uses LSEG segmentation and the wavelet transform on the region boundary segmented image, the classification is performed easily using the object classifier. Accurate information is obtained by applying the wavelet transform, which categorizes the image data into low- and high-frequency subbands. It also extracts the set of features by converting the obtained image data to the frequency domain through the color space transformation called a wavelet, resulting in a higher classification rate. The higher the classification accuracy, the more accurately the shape of the object is recognized. The classification accuracy is 10-11% higher in the proposed SIC system.

Figure 9: Measure of classification accuracy.

Finally, it is observed that the proposed SIC system recognizes the shape of the object in the image more accurately and provides precise information about the object.

6. Conclusion

In this work, a substance based image classification system is presented for image object recognition. To reduce misclassification over the given image data, the background regions are removed from the images by adapting the LSEG segmentation. To obtain more accurate information from the image object data, the wavelet transform is applied to obtain the configured quality features. Based on this feature set, information about the image objects in the region boundary images is obtained. Besides this, an object classifier is implemented for image classification to obtain the exact shape of the object. An experimental evaluation is conducted with the natural image dataset to check the performance of the proposed SIC system. The evaluation results revealed that the proposed SIC system achieves a higher classification rate by removing the surrounding regions of the image. Moreover, the feature extraction process provides the highest classification rate, which enhances the performance of the substance based image classification system. At the same time, the proposed SIC system shows less misclassification of image data, and the image object data is acquired at a rate 13% higher than in the existing work.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. M. Das and A. C. Loui, "Event classification in personal image collections," in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '09), pp. 1660–1663, July 2009.
  2. A. J. Joshi, F. Porikli, and N. Papanikolopoulos, "Multi-class active learning for image classification," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 2372–2379, June 2009.
  3. A. Goltsev and V. Gritsenko, "Investigation of efficient features for image recognition by neural networks," Neural Networks, vol. 28, pp. 15–23, 2012.
  4. H. Pan and L. Xia, "Efficient object recognition using boundary representation and wavelet neural network," IEEE Transactions on Neural Networks, vol. 19, no. 12, pp. 2132–2149, 2008.
  5. K. Huang and S. Aviyente, "Wavelet feature selection for image classification," IEEE Transactions on Image Processing, vol. 17, no. 9, pp. 1709–1720, 2008.
  6. L. Germine, N. Cashdollar, E. Düzel, and B. Duchaine, "A new selective developmental deficit: impaired object recognition with normal face recognition," Cortex, vol. 47, no. 5, pp. 598–607, 2011.
  7. N. Kasabov, K. Dhoble, N. Nuntalid, and G. Indiveri, "Dynamic evolving spiking neural networks for on-line spatio- and spectro-temporal pattern recognition," Neural Networks, vol. 41, pp. 188–201, 2013.
  8. Y. Zheng, Y. Meng, and Y. Jin, "Object recognition using neural networks with bottom-up and top-down pathways," Elsevier Journal, 2011.
  9. S. Sivaraman and M. M. Trivedi, "A general active-learning framework for on-road vehicle recognition and tracking," IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 2, pp. 267–276, 2010.
  10. I. Kokkinos and P. Maragos, "Synergy between object recognition and image segmentation using the expectation-maximization algorithm," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 8, pp. 1486–1501, 2009.
  11. D. Parikh, C. L. Zitnick, and T. Chen, "Exploring tiny images: the roles of appearance and contextual information for machine and human object recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 1978–1991, 2012.
  12. H. Zhuang, K. Low, and W. Yau, "Multichannel pulse-coupled-neural-network-based color image segmentation for object detection," IEEE Transactions on Industrial Electronics, vol. 59, no. 8, pp. 3299–3308, 2012.
  13. Y. Shinagawa, "Homotopic image pseudo-invariants for openset object recognition and image retrieval," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 11, pp. 1891–1901, 2008.
  14. X. Liu, A. Srivastava, and K. Gallivan, "Optimal linear representations of images for object recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 662–666, 2004.
  15. H. Chen and B. Bhanu, "Efficient recognition of highly similar 3D objects in range images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 1, pp. 172–179, 2009.
  16. Z. Guo and Z. J. Wang, "Cross-domain object recognition via input-output Kernel analysis," IEEE Transactions on Image Processing, vol. 22, no. 8, pp. 3108–3119, 2013.
  17. X. Chen and N. A. Schmid, "Empirical capacity of a recognition channel for single- and multipose object recognition under the constraint of PCA encoding," IEEE Transactions on Image Processing, vol. 18, no. 3, pp. 636–651, 2009.
  18. P. P. Gajjar and M. V. Joshi, "New learning based super-resolution: use of DWT and IGMRF prior," IEEE Transactions on Image Processing, vol. 19, no. 5, pp. 1201–1213, 2010.
  19. W. Li, G. Bebis, and N. G. Bourbakis, "3-D object recognition using 2-D views," IEEE Transactions on Image Processing, vol. 17, no. 11, pp. 2236–2255, 2008.