The Scientific World Journal
Volume 2014, Article ID 373254, 10 pages
http://dx.doi.org/10.1155/2014/373254
Research Article

Completed Local Ternary Pattern for Rotation Invariant Texture Classification

School of Electrical & Electronic Engineering, Universiti Sains Malaysia, Engineering Campus, Nibong Tebal, 14300 Penang, Malaysia

Received 20 December 2013; Accepted 11 February 2014; Published 7 April 2014

Academic Editors: G. C. Gini and L. Li

Copyright © 2014 Taha H. Rassem and Bee Ee Khoo. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Despite the fact that the two texture descriptors, the completed modeling of the Local Binary Pattern (CLBP) and the Completed Local Binary Count (CLBC), have achieved remarkable accuracy for rotation invariant texture classification, they inherit some drawbacks of the Local Binary Pattern (LBP). The LBP is sensitive to noise, and different patterns of LBP may be classified into the same class, which reduces its discriminating property. Although the Local Ternary Pattern (LTP) was proposed to be more robust to noise than LBP, the latter weakness appears in LTP as well. In this paper, a novel completed modeling of the Local Ternary Pattern (LTP) operator is proposed to overcome both LBP drawbacks, and an associated Completed Local Ternary Pattern (CLTP) scheme is developed for rotation invariant texture classification. Experimental results on four different texture databases show that the proposed CLTP achieves impressive classification accuracy compared to the CLBP and CLBC descriptors.

1. Introduction

Nowadays, texture analysis and classification have become important areas of computer vision and image processing. They play a vital role in many applications such as visual object recognition and detection [1, 2], human detection [3], object tracking [4], pedestrian classification [5], image retrieval [6, 7], and face recognition [8, 9].

Many texture feature extraction algorithms have been proposed to achieve good texture classification. Most of these algorithms focus on extracting distinctive texture features that are robust to noise, rotation, and illumination variation. These algorithms can be classified into three categories [10]. The first category comprises statistical methods such as polar plots and polarograms [11], texture edge statistics on polar plots [12], moment invariants [13], and the feature distribution method [14]. The second category comprises model based methods such as the simultaneous autoregressive model (SAR) [15], the Markov model [16], and the steerable pyramid [17]. The third category comprises structural methods such as topological texture descriptors [18], the invariant histogram [19], and morphological decomposition [20]. All of these algorithms, as well as many others, are reviewed briefly in several survey papers [10, 21, 22].

The Local Binary Pattern (LBP) operator was proposed by Ojala et al. [23] for rotation invariant texture classification. It has been modified and adapted for several applications such as face recognition [8, 9] and image retrieval [7]. The LBP extraction algorithm contains two main steps, that is, the thresholding step and the encoding step, as shown in Figure 1. In the thresholding step, all the neighboring pixel values in each pattern are compared with the value of the central pixel of the pattern to convert them to binary values (0 or 1). This step captures the information about the local binary differences. Then, in the encoding step, the binary numbers obtained from the thresholding step are encoded and converted into a decimal number that characterizes a structural pattern. Initially, Ojala et al. [24] represented the texture image using a texton histogram computed from the absolute difference between the gray level of the center pixel of a local pattern and its neighbors. The authors then proposed the LBP operator by using the sign of the differences between the gray level of the center pixel and its neighbors instead of the magnitude [23]. The LBP proposed by Ojala et al. [23] has become a research direction for many computer vision researchers because it is able to distinguish microstructures such as edges, lines, and spots. Researchers aim to increase the discriminating property of texture feature extraction to achieve impressive rotation invariant texture classification, so many variants of the LBP have been suggested for rotation invariant texture classification. The center-symmetric Local Binary Pattern (CS-LBP) proposed by Heikkilä et al. [25] is one example; unlike the LBP, it compares center-symmetric pairs of pixels to obtain the encoded binary values. Liao et al. [26] proposed the Dominant LBP (DLBP) by selecting the dominant patterns among all rotation invariant patterns. Tan and Triggs [27] presented a new texture operator which is more robust to noise: they encoded the neighbor pixel values into 3-valued codes instead of 2-valued codes by adding a user threshold. This operator is known as the Local Ternary Pattern (LTP). Guo et al. [28] combined the sign and magnitude differences of each pattern with the central gray level values of all patterns to propose a completed modeling of LBP, called the completed LBP (CLBP). Khellah [29] proposed a new method for texture classification which combines the Dominant Neighborhood Structure (DNS) and the traditional LBP. Zhao et al. [30] proposed a novel texture descriptor, called the Local Binary Count (LBC). They used the same thresholding step as in LBP but discarded the structural information of the LBP operator by counting the number of 1's in the binary neighbor set instead of encoding them.

Figure 1: LBP operator.

Although some LBP variant descriptors such as CLBP and CLBC have achieved remarkable classification accuracy, they inherit the LBP weaknesses. The LBP suffers from two main weaknesses: it is sensitive to noise, and it sometimes falsely classifies two or more different patterns into the same class, as shown in Figures 2 and 3. The LTP descriptor is more robust to noise than LBP; however, the latter weakness appears in LTP as well.

Figure 2: An example of the LBP operator's noise sensitivity.
Figure 3: Similar LBP codes for two different texture patterns.

In this paper, we enhance the LTP texture descriptor to increase its discriminating property by presenting a completed modeling of the LTP operator and proposing an associated Completed Local Ternary Pattern (CLTP). The experimental results illustrate that CLTP is more robust to noise, rotation, and illumination variation, and that it achieves higher texture classification rates than CLBP and CLBC. The rest of this paper is organized as follows. Section 2 briefly reviews the LBP, LTP, CLBP, and CLBC. Section 3 presents the new CLTP scheme. Then, in Section 4, the experimental results on different texture databases are reported and discussed. Finally, Section 5 concludes the paper.

2. Related Work

In this section, a brief review of the LBP, LTP, CLBP, and CLBC is provided.

2.1. Brief Review of LBP

As shown in Figure 1, the LBP operator is computed for the center pixel by comparing its intensity value with the intensity values of its neighbors:
$$\mathrm{LBP}_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\,2^p, \qquad s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases} \tag{1}$$
where $g_c$ and $g_p$ denote the gray value of the center pixel and the gray value of neighbor pixel $p$ on a circle of radius $R$, respectively, and $P$ is the number of neighbors. Bilinear interpolation is used to estimate the neighbors that do not lie exactly at pixel centers. $\mathrm{LBP}^{ri}_{P,R}$ and $\mathrm{LBP}^{riu2}_{P,R}$ denote the rotation invariant LBP and the uniform rotation invariant LBP, respectively. These two enhanced LBP operators were proposed by Ojala et al. [23].
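As a concrete sketch of the thresholding and encoding steps described above, the basic LBP code for a single pattern can be computed in a few lines (using NumPy; the neighbor values are hypothetical, and bilinear interpolation of off-grid neighbors is omitted):

```python
import numpy as np

def lbp_code(neighbors, center):
    """Basic LBP: threshold each neighbor against the center (s(x) = 1 if x >= 0),
    then encode the resulting bits as a decimal number with weights 2^p."""
    bits = (np.asarray(neighbors) >= center).astype(int)  # thresholding step
    weights = 2 ** np.arange(len(bits))                   # encoding step
    return int(np.sum(bits * weights))

# Hypothetical neighbor values, listed in order p = 0..7, around a center of 90.
neighbors = [120, 80, 95, 60, 90, 30, 140, 85]
code = lbp_code(neighbors, 90)  # -> 85 (bits 1,0,1,0,1,0,1,0)
```

Note that neighbors equal to the center are assigned 1, following the $x \geq 0$ convention of the thresholding function.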

After completing the encoding step for any LBP operator, that is, $\mathrm{LBP}^{ri}_{P,R}$ or $\mathrm{LBP}^{riu2}_{P,R}$, the histogram is created as
$$H(k) = \sum_{i=1}^{I} \sum_{j=1}^{J} f\big(\mathrm{LBP}_{P,R}(i,j), k\big), \quad k \in [0, K], \qquad f(x, y) = \begin{cases} 1, & x = y \\ 0, & \text{otherwise} \end{cases} \tag{2}$$
where $K$ is the maximal LBP pattern value.
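The histogram above is simply a count of how often each code value occurs over the image; a minimal sketch, assuming the per-pixel LBP codes have already been computed:

```python
import numpy as np

def lbp_histogram(codes, K):
    """H(k) counts how many pixels produced LBP code k, for k = 0..K
    (K is the maximal pattern value)."""
    H = np.zeros(K + 1, dtype=int)
    for c in codes:
        H[c] += 1
    return H

# Hypothetical codes from four pixels, with maximal pattern value K = 7.
H = lbp_histogram([0, 3, 3, 7], 7)  # -> [1, 0, 0, 2, 0, 0, 0, 1]
```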

2.2. Brief Review of LTP

Tan and Triggs [27] presented a new texture operator which is more robust to noise. The LBP is extended to 3-valued codes $(-1, 0, 1)$. Figure 4 shows an example of the LTP operator. The LTP can be described mathematically as
$$\mathrm{LTP}_{P,R} = \sum_{p=0}^{P-1} s(g_p)\,2^p, \qquad s(g_p) = \begin{cases} 1, & g_p \geq g_c + t \\ 0, & |g_p - g_c| < t \\ -1, & g_p \leq g_c - t \end{cases} \tag{3}$$
where $g_c$, $g_p$, $P$, and $R$ are defined in (1) and $t$ denotes the user threshold. After the thresholding step, the upper pattern and the lower pattern are constructed and coded as shown in Figure 4. The LTP operator is the concatenation of the codes of the upper pattern and the lower pattern.
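A small sketch of the ternary thresholding and the upper/lower split described above (NumPy; the sample neighbor values and threshold are hypothetical):

```python
import numpy as np

def ltp_codes(neighbors, center, t):
    """LTP: 3-valued thresholding, then split into an upper pattern (the +1
    positions) and a lower pattern (the -1 positions), each encoded like LBP."""
    g = np.asarray(neighbors)
    upper = (g >= center + t).astype(int)  # positions coded +1
    lower = (g <= center - t).astype(int)  # positions coded -1
    w = 2 ** np.arange(len(g))
    return int(np.sum(upper * w)), int(np.sum(lower * w))

# Hypothetical 8-neighbor pattern, center 90, user threshold t = 10.
up, lo = ltp_codes([120, 80, 95, 60, 90, 30, 140, 85], 90, 10)  # -> (65, 42)
```

The final descriptor is the concatenation of the two codes, so neighbors within $\pm t$ of the center contribute to neither pattern, which is what makes LTP less sensitive to small noise perturbations.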

Figure 4: LTP operator.

2.3. Brief Review of Completed LBP (CLBP)

The completed LBP (CLBP) descriptor was proposed by Guo et al. [28] in order to improve the performance of the LBP descriptor. As shown in Figure 5, the local difference of the image is decomposed into two complementary components, the sign component $s_p$ and the magnitude component $m_p$:
$$d_p = g_p - g_c = s_p \cdot m_p, \qquad s_p = \operatorname{sign}(d_p), \quad m_p = |d_p| \tag{4}$$

Figure 5: A sample pattern: (a) sign component (LBP_S code); (b) magnitude component (LBP_M code, assuming threshold c = 29).

Then, $s_p$ is used to build the CLBP-Sign (CLBP_S), whereas $m_p$ is used to build the CLBP-Magnitude (CLBP_M). The CLBP_S and CLBP_M are described mathematically as
$$\mathrm{CLBP\_S}_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\,2^p \tag{5}$$
$$\mathrm{CLBP\_M}_{P,R} = \sum_{p=0}^{P-1} t(m_p, c)\,2^p, \qquad t(x, c) = \begin{cases} 1, & x \geq c \\ 0, & x < c \end{cases} \tag{6}$$
where $g_p$, $g_c$, $P$, and $R$ are defined in (1) and $c$ in (6) denotes the mean value of $m_p$ over the whole image.

The CLBP_S is equal to LBP, whereas the CLBP_M measures the local variance of the magnitude. Furthermore, Guo et al. used the gray level of each pattern's center to construct a new operator, called CLBP-Center (CLBP_C):
$$\mathrm{CLBP\_C}_{P,R} = t(g_c, c_I) \tag{7}$$
where $g_c$ denotes the gray value of the center pixel, $t$ is defined in (6), and $c_I$ is the average gray level of the whole image. Guo et al. combined their operators into joint or hybrid distributions and achieved remarkable texture classification accuracy [28].
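The three CLBP operators described above can be sketched together for one pattern as follows; the image-wide mean magnitude $c$ and mean gray level $c_I$ are passed in as precomputed values, and all sample numbers are hypothetical:

```python
import numpy as np

def clbp_codes(neighbors, center, c_m, c_i):
    """CLBP sketch: the sign component yields CLBP_S (identical to LBP), the
    magnitude component thresholded at c_m (image-wide mean magnitude) yields
    CLBP_M, and the center gray level thresholded at c_i (image-wide mean gray
    level) yields the one-bit CLBP_C."""
    g = np.asarray(neighbors, dtype=float)
    d = g - center                  # local differences d_p
    s = (d >= 0).astype(int)        # sign component s_p
    m = np.abs(d)                   # magnitude component m_p
    w = 2 ** np.arange(len(g))
    clbp_s = int(np.sum(s * w))
    clbp_m = int(np.sum((m >= c_m).astype(int) * w))
    clbp_c = int(center >= c_i)
    return clbp_s, clbp_m, clbp_c

# Hypothetical pattern with c = 29 and c_I = 100.
codes = clbp_codes([120, 80, 95, 60, 90, 30, 140, 85], 90, 29, 100)
```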

2.4. Brief Review of Completed LBC (CLBC)

The Local Binary Count (LBC) was proposed by Zhao et al. [30]. Unlike the LBP and all its variants, the authors simply counted the number of 1's from the thresholding step instead of encoding them:
$$\mathrm{LBC}_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c) \tag{8}$$
Similar to CLBP, the authors [30] extended the LBC to the completed LBC (CLBC). The CLBC_S, CLBC_M, and CLBC_C were also combined into joint or hybrid distributions and used for rotation invariant texture classification. The CLBC_M and CLBC_C can be described mathematically as
$$\mathrm{CLBC\_M}_{P,R} = \sum_{p=0}^{P-1} t(m_p, c), \qquad \mathrm{CLBC\_C}_{P,R} = t(g_c, c_I) \tag{9}$$
where $s$, $t$, $m_p$, $c$, and $c_I$ are defined in (1), (6), and (7). An example of LBC is shown in Figure 6.
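The counting idea of LBC is a one-liner; note that, unlike an LBP code, the count does not change when the neighbor ring is rotated, which is what makes LBC rotation invariant by construction (the values below are hypothetical):

```python
def lbc_code(neighbors, center):
    """LBC: same thresholding as LBP, but only COUNT the 1's instead of
    encoding them with binary weights."""
    return sum(1 for g in neighbors if g >= center)

# Same hypothetical pattern as before, and a rotated copy of it: same count.
a = lbc_code([120, 80, 95, 60, 90, 30, 140, 85], 90)  # -> 4
b = lbc_code([85, 120, 80, 95, 60, 90, 30, 140], 90)  # -> 4
```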

Figure 6: LBC operator.

In [28], the rotation invariant uniform LBP ($\mathrm{LBP}^{riu2}_{P,R}$) is used to construct the CLBP operators. For brevity, $\mathrm{CLBP}^{riu2}_{P,R}$ is written in this paper simply as $\mathrm{CLBP}_{P,R}$; the same convention is used for the proposed CLTP operator.

3. Completed Local Ternary Pattern (CLTP)

In this section, we propose the framework of CLTP. Similar to CLBP [28], the LTP is extended to a completed modeling of LTP (CLTP). As mentioned before, the LTP is more robust to noise than LBP. Furthermore, constructing the associated completed Local Ternary Pattern enhances and increases its discriminating property. The mathematical model of LTP is shown in (3). In CLTP, the local difference of the image is decomposed into two complementary sign components and two complementary magnitude components as follows:
$$s_p^{upper} = s\big(g_p - (g_c + t)\big), \qquad s_p^{lower} = s\big((g_c - t) - g_p\big)$$
$$m_p^{upper} = \big|g_p - (g_c + t)\big|, \qquad m_p^{lower} = \big|g_p - (g_c - t)\big| \tag{10}$$
where $g_p$, $g_c$, $s$, and $t$ are defined in (1) and (3).

Then $s_p^{upper}$ and $s_p^{lower}$ are used to build the $\mathrm{CLTP\_S}^{upper}$ and $\mathrm{CLTP\_S}^{lower}$, respectively:
$$\mathrm{CLTP\_S}^{upper}_{P,R} = \sum_{p=0}^{P-1} s_p^{upper}\,2^p, \qquad \mathrm{CLTP\_S}^{lower}_{P,R} = \sum_{p=0}^{P-1} s_p^{lower}\,2^p \tag{11}$$

The $\mathrm{CLTP\_S}$ is the concatenation of the $\mathrm{CLTP\_S}^{upper}$ and the $\mathrm{CLTP\_S}^{lower}$:
$$\mathrm{CLTP\_S}_{P,R} = \big[\mathrm{CLTP\_S}^{upper}_{P,R},\ \mathrm{CLTP\_S}^{lower}_{P,R}\big] \tag{12}$$
where $g_p$, $g_c$, $P$, $R$, and $t$ in (11) are defined in (3).

Similar to $\mathrm{CLTP\_S}$, the $\mathrm{CLTP\_M}$ is built using the two complementary magnitude components $m_p^{upper}$ and $m_p^{lower}$:
$$\mathrm{CLTP\_M}^{upper}_{P,R} = \sum_{p=0}^{P-1} t(m_p^{upper}, c)\,2^p \tag{13}$$
$$\mathrm{CLTP\_M}^{lower}_{P,R} = \sum_{p=0}^{P-1} t(m_p^{lower}, c)\,2^p \tag{14}$$
$$\mathrm{CLTP\_M}_{P,R} = \big[\mathrm{CLTP\_M}^{upper}_{P,R},\ \mathrm{CLTP\_M}^{lower}_{P,R}\big] \tag{15}$$
where $g_p$, $g_c$, $P$, $R$, and the threshold in (13) and (14) are defined in (3) and $c$ is defined in (6).

Moreover, the $\mathrm{CLTP\_C}$ can be described mathematically as
$$\mathrm{CLTP\_C}_{P,R} = t(g_c, c_I) \tag{16}$$
where $t$ is defined in (6) and $c_I$ is the average gray level of the whole image.
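Putting the CLTP sign, magnitude, and center components together, a compact sketch for a single pattern follows (NumPy; $c$ and $c_I$ are assumed to be precomputed over the whole image, and the sample values are hypothetical):

```python
import numpy as np

def cltp_codes(neighbors, center, t, c_m, c_i):
    """CLTP sketch: decompose the local difference into upper/lower sign
    components and upper/lower magnitude components, then encode each as a
    binary code. c_m is the image-wide mean magnitude and c_i the image-wide
    mean gray level (both assumed precomputed)."""
    g = np.asarray(neighbors, dtype=float)
    w = 2 ** np.arange(len(g))
    s_up = (g >= center + t).astype(int)   # upper sign component
    s_lo = (g <= center - t).astype(int)   # lower sign component
    m_up = np.abs(g - (center + t))        # upper magnitude component
    m_lo = np.abs(g - (center - t))        # lower magnitude component
    return {
        "S_upper": int(np.sum(s_up * w)),
        "S_lower": int(np.sum(s_lo * w)),
        "M_upper": int(np.sum((m_up >= c_m) * w)),
        "M_lower": int(np.sum((m_lo >= c_m) * w)),
        "C": int(center >= c_i),
    }

# Hypothetical pattern: center 90, t = 10, c = 29, c_I = 100.
codes = cltp_codes([120, 80, 95, 60, 90, 30, 140, 85], 90, 10, 29, 100)
```

The upper codes and lower codes are then histogrammed separately and concatenated, which is why the CLTP histogram has twice as many bins as the CLBP histogram.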

The proposed CLTP operators are combined into joint or hybrid distributions to build the final operator histogram, like the CLBP and CLBC [28, 30], respectively. In CLTP, the operators of the same type of pattern, that is, the upper and the lower patterns, are first combined into joint or hybrid distributions. Their results are then concatenated to build the final operator histogram. This means that the number of bins of CLTP is double the number of bins of CLBP.

4. Experiments and Discussion

In this section, a series of experiments is performed to evaluate the proposed CLTP. Four large and comprehensive texture databases are used in these experiments: the Outex database [31], the Columbia-Utrecht Reflection and Texture (CUReT) database [32], the UIUC database [33], and the XU_HR database [34]. The threshold value t was set empirically.

4.1. Dissimilarity Measuring Framework

Several metrics have been proposed for measuring the dissimilarity between two histograms, such as the log-likelihood ratio, histogram intersection, and the chi-square statistic. Similar to [28, 30], the chi-square statistic is used in this paper. The distance between two histograms $H$ and $G$ with $K$ bins is
$$\chi^2(H, G) = \sum_{k=1}^{K} \frac{(H_k - G_k)^2}{H_k + G_k} \tag{17}$$
Furthermore, the nearest neighborhood classifier is used for classification in all experiments in this paper.
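A minimal sketch of the chi-square dissimilarity and the nearest neighborhood classification rule described above (NumPy; bins where both histograms are zero are skipped to avoid division by zero):

```python
import numpy as np

def chi_square(H, G):
    """Chi-square distance between two histograms; bins where both
    histograms are zero contribute nothing."""
    H, G = np.asarray(H, dtype=float), np.asarray(G, dtype=float)
    den = H + G
    mask = den > 0
    return float(np.sum((H[mask] - G[mask]) ** 2 / den[mask]))

def nearest_neighbor(query, train_hists, train_labels):
    """Assign the label of the training histogram with the smallest distance."""
    d = [chi_square(query, h) for h in train_hists]
    return train_labels[int(np.argmin(d))]

# Hypothetical histograms and labels.
label = nearest_neighbor([1, 0], [[1, 0], [0, 1]], ["grass", "brick"])
```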

4.2. Experimental Results on the Outex Database

The Outex datasets (http://www.outex.oulu.fi/index.php?page=classification) include 16 test suites, from Outex_TC_00010 (TC10) to Outex_TC_00016 (TC16) [31]. These suites were collected under different illumination, rotation, and scaling conditions. Outex_TC_00010 (TC10) and Outex_TC_00012 (TC12) are the two best-known test suites in the Outex datasets. These two suites share the same 24 texture classes, collected under three different illuminants ("horizon," "inca," and "t184") and nine different rotation angles (0°, 5°, 10°, 15°, 30°, 45°, 60°, 75°, and 90°). For each illuminant and rotation angle, each class has 20 nonoverlapping texture samples. Examples of Outex images are shown in Figure 7. For TC10, the 480 images under the "inca" illuminant at 0° rotation are used as training data, whereas the images under the remaining eight rotation angles and the "inca" illuminant are used as testing data, that is, 3840 images. The training data for TC12 is the same as for TC10, while all images under the "t184" or "horizon" illuminant are used as testing data, that is, 4320 images for "t184" and 4320 images for "horizon." The experimental results for TC10, TC12 ("t184"), and TC12 ("horizon") are shown in Table 1.

Table 1: Classification rates (%) on the TC10 and TC12 databases.
Figure 7: Some images from the Outex database.

From Table 1, the following points can be observed. Firstly, the CLTP_S, CLTP_M, CLTP_M/C, and CLTP_S_M/C performed better than the corresponding CLBP and CLBC operators. Secondly, the CLTP_S and CLTP_M showed much greater discrimination capability than CLBP_S and CLBP_M and than CLBC_S and CLBC_M, respectively, with a substantial accuracy difference in some cases. Thirdly, the CLTP_S/M worked well with TC10 and TC12 for certain (P, R) configurations, and only with TC12 under the "t184" illuminant for others. Finally, the CLTP_S/M/C achieved the best classification accuracy for TC10 and TC12 at the remaining (P, R) settings.

4.3. Experimental Results on CUReT Database

The CUReT dataset (http://www.robots.ox.ac.uk/~vgg/research/texclass/index.html) has 61 texture classes. In each class, there are 205 images taken under different illumination and viewpoint conditions [32]. Out of the 205 images in each class, there are 118 shots whose viewing angles are less than 60°. Examples of CUReT images are shown in Figure 8. From these images, 92 per class are selected, converted to gray scale, and cropped. In each class, N of the 92 images are used as training data, while the remaining images are used as testing data. The final classification accuracy is the average percentage over a hundred random splits. The average CUReT classification rates for different numbers of training images are shown in Table 2. It is easy to see that CLTP performs better than CLBP and CLBC on the CUReT database. Except for CLTP_S/M/C, all CLTP operators achieve higher classification rates than the corresponding CLBP and CLBC operators for all numbers of training images and at every radius. The CLTP_S/M/C achieved the best classification rates for all numbers of training images at radius 3, for 6, 23, and 46 training images at radius 2, and for 6 training images only at radius 1, while the CLBP_S/M/C achieved the best classification rates for 12, 23, and 46 training images at radius 1 and for 12 training images at radius 2.

Table 2: Classification rates (%) on the CUReT database.
Figure 8: Some images from the CUReT dataset.

4.4. Experimental Results on UIUC Database

The UIUC database has 25 classes containing images captured under significant viewpoint variations. Each class has 40 images. Examples of UIUC images are shown in Figure 9. In each class, N of the 40 images are used as training data, while the remaining images are used as testing data. The final classification accuracy is the average percentage over a hundred random splits. The average UIUC classification rates are shown in Table 3. Except for CLTP_S/M/C, all CLTP operators achieved higher performance than the CLBP and CLBC operators for all numbers of training images at radii 1, 2, and 3. The CLTP_S/M/C outperformed the CLBP_S/M/C and CLBC_S/M/C for all numbers of training images at certain (P, R) settings. On the other hand, the CLBC_S/M/C is the best for small numbers of training images, that is, 5 and 10, while the CLBP_S/M/C and the CLTP_S/M/C are the best for the larger training sets.

Table 3: Classification rates (%) on the UIUC database.
Figure 9: Some images from the UIUC database.

4.5. Experimental Results on XU-HR Database

The XU_HR database has 25 classes with 40 high resolution images in each class. Examples of XU_HR images are shown in Figure 10. In each class, N of the 40 images are used as training data, while the remaining images are used as testing data. The final classification accuracy is the average percentage over a hundred random splits. The average XU_HR classification rates are shown in Table 4. On the XU_HR database, the CLTP achieved higher classification rates than CLBP for all numbers of training images at all radii. The CLTP_S/M/C achieved an impressive classification rate, reaching 99%. We compare on this database only with CLBP, since XU_HR results for CLBC are not available in [30].

Table 4: Classification rates (%) on the XU_HR database.
Figure 10: Some images from the XU_HR database.

5. Conclusion

To overcome some drawbacks of LBP, the existing LTP operator is extended to build a new texture operator, named the Completed Local Ternary Pattern (CLTP). The proposed CLTP scheme is evaluated on four challenging texture databases for rotation invariant texture classification. The experimental results in this paper demonstrate the superiority of the proposed CLTP over the recent texture operators CLBP and CLBC. This is because the proposed CLTP is insensitive to noise and has a high discriminating property, which leads to impressive classification accuracy rates.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors would like to sincerely thank Z. Guo for sharing the source codes of CLBP.

References

  1. J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba, “SUN database: large-scale scene recognition from abbey to zoo,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 3485–3492, June 2010.
  2. Y. Lee, D. K. Han, and H. Ko, “Reinforced adaboost learning for object detection with local pattern representations,” The Scientific World Journal, vol. 2013, Article ID 153465, 14 pages, 2013.
  3. X. Wang, T. Han, and S. Yan, “An HOG-LBP human detector with partial occlusion handling,” in Proceedings of the 12th IEEE International Conference on Computer Vision, pp. 32–39, 2009.
  4. V. Takala and M. Pietikäinen, “Multi-object tracking using color, texture and motion,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–7, June 2007.
  5. M. Enzweiler and D. M. Gavrila, “A multilevel mixture-of-experts framework for pedestrian classification,” IEEE Transactions on Image Processing, vol. 20, no. 10, pp. 2967–2979, 2011.
  6. F. Qiao, C. Wang, X. Zhang, and H. Wang, “Large scale near-duplicate celebrity web images retrieval using visual and textual features,” The Scientific World Journal, vol. 2013, Article ID 795408, 11 pages, 2013.
  7. S. Murala, R. P. Maheshwari, and R. Balasubramanian, “Local tetra patterns: a new feature descriptor for content-based image retrieval,” IEEE Transactions on Image Processing, vol. 21, no. 5, pp. 2874–2886, 2012.
  8. B. Zhang, Y. Gao, S. Zhao, and J. Liu, “Local derivative pattern versus local binary pattern: face recognition with high-order local pattern descriptor,” IEEE Transactions on Image Processing, vol. 19, no. 2, pp. 533–544, 2010.
  9. J. Y. Choi, Y. M. Ro, and K. N. Plataniotis, “Color local texture features for color face recognition,” IEEE Transactions on Image Processing, vol. 21, no. 3, pp. 1366–1380, 2012.
  10. J. Zhang and T. Tan, “Brief review of invariant texture analysis methods,” Pattern Recognition, vol. 35, no. 3, pp. 735–747, 2002.
  11. L. S. Davis, “Polarograms: a new tool for image texture analysis,” Pattern Recognition, vol. 13, no. 3, pp. 219–223, 1981.
  12. M. A. Mayorga and L. C. Ludeman, “Shift and rotation invariant texture recognition with neural nets,” in Proceedings of the IEEE International Conference on Neural Networks, IEEE World Congress on Computational Intelligence, vol. 6, pp. 4078–4083, June 1994.
  13. M. K. Hu, “Visual pattern recognition by moment invariants,” IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179–187, 1962.
  14. M. Pietikäinen, T. Ojala, and Z. Xu, “Rotation-invariant texture classification using feature distributions,” Pattern Recognition, vol. 33, no. 1, pp. 43–52, 2000.
  15. R. L. Kashyap and A. Khotanzad, “A model-based method for rotation invariant texture classification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 4, pp. 472–481, 1986.
  16. F. S. Cohen, Z. Fan, and M. A. Patel, “Classification of rotated and scaled textured images using Gaussian Markov random field models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 2, pp. 192–202, 1991.
  17. H. Greenspan, S. Belongie, R. Goodman, and P. Perona, “Rotation invariant texture recognition using a steerable pyramid,” in Proceedings of the 12th International Conference on Pattern Recognition, vol. 2, pp. 162–167, 1994.
  18. G. Eichmann and T. Kasparis, “Topologically invariant texture descriptors,” Computer Vision, Graphics and Image Processing, vol. 41, no. 3, pp. 267–281, 1988.
  19. R. K. Goyal, W. L. Goh, D. P. Mital, and K. L. Chan, “Scale and rotation invariant texture analysis based on structural property,” in Proceedings of the IEEE 21st International Conference on Industrial Electronics, Control, and Instrumentation, vol. 2, pp. 1290–1294, November 1995.
  20. W. K. Lam and C. Li, “Rotated texture classification by improved iterative morphological decomposition,” IEE Proceedings Vision, Image and Signal Processing, vol. 144, no. 3, pp. 171–179, 1997.
  21. T. R. Reed and J. M. H. Dubuf, “A review of recent texture segmentation and feature extraction techniques,” CVGIP: Image Understanding, vol. 57, no. 3, pp. 359–372, 1993.
  22. R. W. Conners and C. A. Harlow, “A theoretical comparison of texture algorithms,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 2, no. 3, pp. 204–222, 1980.
  23. T. Ojala, M. Pietikäinen, and T. Mäenpää, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971–987, 2002.
  24. T. Ojala, M. Pietikäinen, and D. Harwood, “A comparative study of texture measures with classification based on feature distributions,” Pattern Recognition, vol. 29, no. 1, pp. 51–59, 1996.
  25. M. Heikkilä, M. Pietikäinen, and C. Schmid, “Description of interest regions with center-symmetric local binary patterns,” in Proceedings of the 5th Indian Conference on Computer Vision, Graphics and Image Processing, vol. 4338, pp. 58–69, 2006.
  26. S. Liao, M. W. K. Law, and A. C. S. Chung, “Dominant local binary patterns for texture classification,” IEEE Transactions on Image Processing, vol. 18, no. 5, pp. 1107–1118, 2009.
  27. X. Tan and B. Triggs, “Enhanced local texture feature sets for face recognition under difficult lighting conditions,” IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1635–1650, 2010.
  28. Z. Guo, L. Zhang, and D. Zhang, “A completed modeling of local binary pattern operator for texture classification,” IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1657–1663, 2010.
  29. F. M. Khellah, “Texture classification using dominant neighborhood structure,” IEEE Transactions on Image Processing, vol. 20, no. 11, pp. 3270–3279, 2011.
  30. Y. Zhao, D. S. Huang, and W. Jia, “Completed local binary count for rotation invariant texture classification,” IEEE Transactions on Image Processing, vol. 21, no. 10, pp. 4492–4497, 2012.
  31. T. Ojala, T. Mäenpää, M. Pietikäinen, J. Viertola, J. Kyllönen, and S. Huovinen, “Outex—new framework for empirical evaluation of texture analysis algorithms,” in Proceedings of the 16th International Conference on Pattern Recognition, no. 1, pp. 701–706, 2002.
  32. M. Varma and A. Zisserman, “Classifying images of materials: achieving viewpoint and illumination independence,” in Proceedings of the 7th European Conference on Computer Vision, vol. 3, pp. 255–271, May 2002.
  33. S. Lazebnik, C. Schmid, and J. Ponce, “A sparse texture representation using local affine regions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1265–1278, 2005.
  34. Y. Xu, H. Ji, and C. Fermüller, “Viewpoint invariant texture description using fractal analysis,” International Journal of Computer Vision, vol. 83, no. 1, pp. 85–100, 2009.