International Scholarly Research Notices


Research Article | Open Access

Volume 2013 | Article ID 876386 | 14 pages | https://doi.org/10.1155/2013/876386

New Brodatz-Based Image Databases for Grayscale Color and Multiband Texture Analysis

Academic Editor: R. Schettini
Received: 26 Nov 2012
Accepted: 14 Jan 2013
Published: 24 Feb 2013

Abstract

Grayscale and color textures can have spectral informative content. This spectral information coexists with the grayscale or chromatic spatial pattern that characterizes the texture. This informative and nontextural spectral content can be a source of confusion for rigorous evaluations of the intrinsic textural performance of texture methods. In this paper, we used basic image processing tools to develop a new class of textures in which texture information is the only source of discrimination. Spectral information in this new class of textures contributes only to form texture. The textures are grouped into two databases. The first is the Normalized Brodatz Texture database (NBT) which is a collection of grayscale images. The second is the Multiband Texture (MBT) database which is a collection of color texture images. Thus, this new class of textures is ideal for rigorous comparisons between texture analysis methods based only on their intrinsic performance on texture characterization.

1. Introduction

It has long been argued that texture plays a key role in computer-based pattern recognition. Texture can be the only effective way to discriminate between different surfaces that have similar spectral characteristics [1–6].

Texture was recognized early on as mainly a spatial distribution of tonal variations in the same band [7]. Different grayscale texture methods have been proposed based on different techniques [7–12]. For an objective and rigorous comparison between different texture analysis methods, it is important to use standard databases [13, 14]. The standard Brodatz grayscale texture album [15] has been widely used as a validation dataset [16, 17]. It is composed of 112 grayscale images representing a large variety of natural grayscale textures. This database has been used with different levels of complexity in texture classification [18], texture segmentation [19], and image retrieval [20]. A rotation invariant version of the Brodatz database was also proposed [21] and used for texture classification and retrieval [22, 23].

Recently, we have seen a growing interest in color texture [24]. This is a natural evolution of the field of texture, from grayscale to color texture. The use of color in texture analysis has shown several benefits [25–27]. In color texture, efforts have been made to find efficient methods to combine color and texture features [24]. Consequently, the evaluation of color texture methods requires images in which color and texture information are both sources of discriminative information. Many color texture databases have been proposed for the evaluation of color texture methods. The VisTex database from the MIT Media Lab, the Corel Stock Photo Library, the color Outex database [21], and the CUReT database [28] are the most widely used. Images from these databases have rich textural and chromatic content and are ideal for color texture methods.

In this paper, we examine the validation of texture methods from a different point of view. We start from a simple observation: an image has spectral and textural information, and both can be used in image description [7]. The spectral information can affect the performance of texture characterization due to the differences in the mathematical concept of these methods. A good example is the cooccurrence matrices method [7] and the texture spectrum method [8]. The first is more sensitive to spectral information because it uses the spectral values as they appear in the images while the second uses the relative spectral values [29]. A rigorous evaluation of texture methods, without the influence of spectral information, requires images with texture as the only source of discrimination. The use of such images will guarantee that texture methods are compared on the same basis of textural performance.

Here, we propose a new texture database in which images do not have discriminative spectral information. The aim is to provide the pattern recognition community with new images that allow validation of texture analysis methods based only on texture information. To do so, we used basic yet efficient image processing techniques to produce texture images without any pure spectral informative content. The first database is the Normalized Brodatz Texture (NBT) database, which is a collection of grayscale textures derived from the Brodatz album. Images from the NBT database have the rich textural content of the original Brodatz textures. At the same time, their spectral content is uninformative. The second database is the Multiband Texture (MBT) database. This database is a collection of color images. The color of these images contributes only to form texture and does not have any discriminative value if used as pure spectral information. Images from these databases have different levels of complexity in terms of their intraband and interband spatial distributions. This allows developing texture characterization problems with various degrees of difficulty. The proposed databases along with the existing databases form a more complete dataset for the evaluation of texture methods.

The paper is organized as follows. In Section 2, we present the normalized grayscale Brodatz textures. Section 3 presents a comprehensive analysis of the chromatic and textural content of some color texture images from the VisTex database. Section 4 illustrates the concept of multiband texture using astronomical and remote sensing satellite images. In Section 5, a new multiband texture database is presented and analyzed. Section 6 presents experiments on the multiband texture database. Conclusions are drawn in Section 7.

2. Normalized Grayscale Textures

This section presents the first texture category: grayscale texture. A good example of this type of texture is the 112 texture images of the Brodatz album. This album provides a very useful natural texture database, which has been widely used to evaluate texture discrimination methods [30–33]. Textures from this album can be digitized into different gray-level intervals resulting in different background intensities. In Figure 1 we give an example of six different Brodatz textures organized into two sets: D32, D28, and D10 (top row); and D64, D95, and D75 (bottom row). For example, D32 has a black background while D28 and D10 have gray and white backgrounds, respectively. This background effect introduces discriminative information to these images, in addition to their initial texture content. Indeed, as shown in Figure 2, these textures have localized modes, and it is possible to discriminate between them with significant accuracy using only their histograms (i.e., background intensity, without using texture information).

It would be interesting to compare all texture analysis methods using images with the same gray-level interval so that the background intensity does not interfere in the texture discrimination process. Various simple yet efficient techniques can be used to perform this task. In this paper we used linear stretching [34], histogram equalization [35], and contrast limited adaptive histogram equalization [34].

We removed the background effect of the Brodatz textures by normalizing them to the same eight-bit (256 gray levels) intensity interval. A good normalization algorithm needs to preserve the visual appearance of the texture of the original image, while redistributing the image gray levels in order to occupy the whole 256 intensity interval. To do so, the histograms of all the 112 Brodatz images were generated and visually analyzed. Then, different normalization techniques were tested on each image, and the one that redistributed the images’ gray values with the least visual alteration of the texture was selected. Figures 3 and 4 show the normalized images in Figure 1 and their corresponding histograms, respectively. Unlike the original images, the gray values of the normalized images occupy the whole 256 gray-level range and, consequently, it is not possible to discriminate between them using only first-order statistics. The use of texture information is necessary to discriminate between these normalized texture images.
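The normalization step can be sketched with basic tools. The snippet below is a minimal sketch, not the exact pipeline used to build the NBT database: it applies linear stretching and histogram equalization to a synthetic dark texture (standing in for a Brodatz image with a black background) so that its gray levels span the full 256-level range.

```python
import numpy as np

def linear_stretch(img):
    """Linearly map the image's gray levels onto the full [0, 255] range."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return np.round((img - lo) / (hi - lo) * 255).astype(np.uint8)

def histogram_equalize(img):
    """Redistribute gray levels so the cumulative histogram becomes roughly linear."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# Synthetic dark texture confined to gray levels 10-59 (black-background effect).
dark = (np.arange(64 * 64).reshape(64, 64) % 50 + 10).astype(np.uint8)
stretched = linear_stretch(dark)
equalized = histogram_equalize(dark)
print(stretched.min(), stretched.max())  # -> 0 255
```

Either technique removes the background intensity cue; the one that least alters the visual texture would be selected per image, as described above.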

We produced a new database called the Normalized Brodatz Texture (NBT) database, which is available online (http://pages.usherbrooke.ca/asafia/mbt/) to allow the validation of texture analysis methods based only on texture information.

3. Colored Textures

This section presents the second texture category referred to as colored texture. A representative example of this category is the VisTex database. It can be seen as a generalization of the Brodatz database from grayscale to color texture. In this section we will analyze the chromatic and textural content of some images from this database.

3.1. Chromatic Content Analysis

Figure 5 presents three typical natural color texture images (i.e., Fabric.0001, Wood.0002, and Water.0005; each image is 512 by 512 pixels) from the VisTex database. From a chromatic viewpoint, each texture image in Figure 5 has a predominantly monotone color: brown, brown-gray, and blue-green (from left to right). This monotone color is the result of the gray-level distribution of each of the three RGB (red, green, blue) channels shown in Figure 6. The histograms of these channels show well-localized peaks, and the differences between the three histograms of each texture are mainly attributed to shifts in the pixel intensities along the x-axis without a significant change in the histogram shape. The well-localized peaks produce the same background as for Brodatz images (Section 2). On the other hand, the correlation coefficients between the three RGB channels of each texture image show that they are strongly correlated (r > 0.9 in all cases; see Table 1).



Table 1: Correlation coefficients (r) between pairs of RGB channels of the VisTex images.

Texture          R-G     R-B     G-B
Fabric.0001      0.998   0.923   0.953
Wood.0002        0.964   0.909   0.972
Water.0005       0.979   0.932   0.914
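The channel correlations reported in Table 1 are ordinary Pearson coefficients between flattened channels. The sketch below reproduces the computation on a synthetic image (hypothetical data, standing in for a VisTex texture) whose G and B channels are shifted and scaled copies of R, mimicking the monotone-color case.

```python
import numpy as np

def channel_correlations(rgb):
    """Pearson correlation between each pair of channels of an H x W x 3 image,
    returned in the order (R-G, R-B, G-B)."""
    r, g, b = (rgb[..., k].ravel().astype(np.float64) for k in range(3))
    corr = lambda x, y: np.corrcoef(x, y)[0, 1]
    return corr(r, g), corr(r, b), corr(g, b)

# Synthetic "monotone" texture: G is a shifted copy of R, B a scaled copy.
rng = np.random.default_rng(1)
base = rng.integers(0, 200, size=(64, 64))
rgb = np.dstack([base, base + 30, (0.8 * base + 20).astype(int)])
rg, rb, gb = channel_correlations(rgb)
print(rg > 0.999, rb > 0.99)  # shifted/scaled copies correlate almost perfectly
```

A pure intensity shift between channels leaves the correlation at 1, which is why strongly monotone images show the near-unit coefficients of Table 1.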

3.2. Textural Content Analysis

In Figure 7 the three RGB channels of the Fabric.0001 texture image are presented separately. These three channels contain the same texture with different dominant gray-level intensities. In order to provide quantitative measurements of the texture similarity of these three RGB channels, cooccurrence matrix features (contrast and dissimilarity) were estimated separately for each channel using a moving window of five by five pixels and a displacement vector of one pixel in the horizontal direction (0°). Six textural channels (i.e., two features for each of the three channels) were generated and organized into two triplets (i.e., one triplet for each texture feature). The correlation coefficient (r) was then calculated for the three textural channels of each triplet in order to analyze the variation of texture between the three channels. As shown in Table 2, the texture features of the same image were highly correlated, with r ≥ 0.960 in all cases.



Table 2: Correlation coefficients (r) between the texture feature channels of Fabric.0001.

Feature          R-G     R-B     G-B
Contrast         0.997   0.960   0.975
Dissimilarity    0.998   0.972   0.982
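A minimal, numpy-only version of the contrast and dissimilarity computation is sketched below. It builds a single global co-occurrence matrix for the horizontal (0°, distance 1) displacement; the paper instead estimates the features in a 5 × 5 moving window, and the input textures here are synthetic stand-ins.

```python
import numpy as np

def glcm_features(band, levels=16):
    """Contrast and dissimilarity from a horizontal (0 deg, distance 1)
    co-occurrence matrix, computed globally over a quantized band."""
    q = (band.astype(np.float64) / 256 * levels).astype(int)    # quantize to `levels` bins
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)   # horizontal neighbor pairs
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    contrast = float(np.sum(glcm * (i - j) ** 2))
    dissimilarity = float(np.sum(glcm * np.abs(i - j)))
    return contrast, dissimilarity

rng = np.random.default_rng(2)
coarse = np.repeat(rng.integers(0, 256, size=(32, 8)), 4, axis=1)  # coarse texture
fine = rng.integers(0, 256, size=(32, 32))                         # fine random texture
c_coarse, d_coarse = glcm_features(coarse)
c_fine, d_fine = glcm_features(fine)
print(c_coarse < c_fine and d_coarse < d_fine)  # coarser texture -> lower contrast
```

Applying the same features to each RGB channel and correlating the resulting feature images gives the Table 2 measurements.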

In order to provide more detailed evaluation measures, these correlations were also estimated between the rows of the three channels. The results are summarized in Figure 8. For a fixed texture feature, this figure gives the correlation coefficient (y-axis) between the i-th row (i is the index on the x-axis) of this texture feature image estimated in a fixed channel and the i-th row of the same texture feature image estimated in a different channel. This figure can be interpreted as correlation profiles along the row dimension of the texture feature images.

As shown in Figure 8, an important correlation exists between the texture information for the three channels of the Fabric.0001 texture. For both texture features (i.e., contrast and dissimilarity), the maximum correlation was recorded between the R and G channels (r close to 1). For the six correlation profiles, the minimum recorded correlation was 0.881, confirming that, just as for the chromatic content, the texture content of the three Fabric.0001 channels was highly correlated. Similar results were obtained for other VisTex images. This high similarity explains why methods using only one band to estimate texture are successful.

3.3. Colored Brodatz Texture Database

Given the strong chromatic content and the high textural similarity of VisTex images, it was possible to transform grayscale Brodatz images to color images similar to VisTex images. To do so, for each Brodatz image, two additional channels were generated to form pseudo-RGB color texture images by a simple gray-level shift. This produced color Brodatz texture images (Figure 9) with richly textured content (the same as the original Brodatz textures) and high informative color content similar to VisTex images. This process produced a gradient of colors (e.g., D44 and D95) that gave these images a natural appearance, while preserving the original Brodatz texture. The entire grayscale Brodatz album (112 images) was generalized from grayscale to color texture by random histogram shifting. We call this database the Colored Brodatz Texture (CBT) database, and it is available online (http://pages.usherbrooke.ca/asafia/mbt/).
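The CBT construction is described above only as "a simple gray-level shift". The sketch below shows one plausible realization under that description: it builds a pseudo-RGB image by cyclically shifting the gray levels of a single grayscale texture, with hypothetical shift amounts.

```python
import numpy as np

def colorize_by_shift(gray, shift_g=40, shift_b=80):
    """Pseudo-RGB from one grayscale texture: G and B are cyclically shifted
    copies of the gray levels (the shift amounts here are hypothetical)."""
    g = ((gray.astype(int) + shift_g) % 256).astype(np.uint8)
    b = ((gray.astype(int) + shift_b) % 256).astype(np.uint8)
    return np.dstack([gray, g, b])

tex = (np.arange(64 * 64).reshape(64, 64) % 256).astype(np.uint8)  # stand-in texture
rgb = colorize_by_shift(tex)
# Every channel has exactly the same spatial pattern as the original texture,
# so the texture is preserved while color is introduced.
print(rgb.shape)  # -> (64, 64, 3)
```

Because each channel is a gray-level remapping of the same image, the spatial texture of all three channels is identical, matching the high interband correlation of VisTex-like images.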

4. Multiband Texture

This section presents the third category of texture referred to as multiband texture. A good example of this category is astronomical and remote sensing images. This section presents an analysis of the chromatic and textural content of some of these images.

4.1. Chromatic Content Analysis

Figure 10 presents three images: the first two are astronomical images from the Spitzer and Hubble NASA telescopes, and the third is from a remote sensing Earth-observation satellite. These images were taken with instruments having relatively high spatial resolutions (e.g., ~1.8 m for WorldView-2). This produces contrasting bands because it is possible to detect small details in the observed object. Indeed, as shown in Figure 11, the histograms of these images cover the whole 256 gray-level range, in contrast with the images in Figure 6. On the other hand, the wavelengths depicted in each of the three-color composites are very different: for example, for the Tarantula Nebula image, emission at 775 nm is depicted in green and at 826 nm in blue. This produces RGB bands that are less correlated than those of images with closer wavelengths, such as natural images taken in the visible spectrum. As shown in Table 3, the correlation coefficients between their RGB channels are relatively small compared with those of the VisTex images given in Table 1 (e.g., r = −0.108 and r = 0.390 between the red and blue bands of the WorldView-2 and Tarantula Nebula images, respectively). These two characteristics, related to spatial resolution and wavelength, contribute to the formation of images with rich color content.



Table 3: Correlation coefficients (r) between pairs of RGB channels of the astronomical and remote sensing images.

Image              R-G     R-B      G-B
Galaxy IC-342      0.842   0.766    0.751
Tarantula Nebula   0.869   0.390    0.710
WorldView-2        0.606   −0.108   0.643

Quantitative measures of the color distribution of images in Figure 10 were carried out based on the histogram of the hue component of the HSI transform [36]. Figures 12(a) and 12(b) give the results obtained for the two images: Galaxy IC-342 and the portion of the WorldView-2 image showing a quarry. The x-axis represents colors ranging from hue = 0° to hue = 360° and the y-axis is the frequency of the corresponding color. These two figures show that the images in Figure 10 have a large color range instead of a specific localized color range, as was the case for VisTex images (Figures 12(c) and 12(d)). In these images, there is no color background; consequently, it is not possible to discriminate between these images based only on their color.
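The hue histogram used for Figure 12 can be sketched with the standard library's colorsys module. The example below uses a hypothetical monotone brown image, not an actual VisTex image: a single dominant color collapses the histogram into one bin, whereas a rich color content would spread it over many bins.

```python
import colorsys
import numpy as np

def hue_histogram(rgb, bins=36):
    """Histogram of the hue component (0-360 deg) of an H x W x 3 uint8 image.
    A single peak indicates one dominant background color; a flat histogram
    indicates rich color content."""
    hues = np.array([colorsys.rgb_to_hsv(*(px / 255.0))[0] * 360
                     for px in rgb.reshape(-1, 3)])
    hist, _ = np.histogram(hues, bins=bins, range=(0, 360))
    return hist

# A hypothetical monotone brown image: all hue mass falls into a single bin.
brown = np.dstack([np.full((32, 32), 150),
                   np.full((32, 32), 90),
                   np.full((32, 32), 40)]).astype(np.uint8)
hist = hue_histogram(brown)
print((hist > 0).sum())  # -> 1
```

The paper's analysis uses the hue of the HSI transform [36]; the HSV hue used here is a close stand-in for illustrating localized versus spread-out color distributions.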

4.2. Textural Content Analysis

Cooccurrence-matrix-based features (contrast and dissimilarity) were estimated and analyzed (as described in Section 3) for two images in Figure 10 (Galaxy IC-342 and WorldView-2). The correlation coefficient (r) was then calculated between the three textural channels of these two images as described in Section 3. Results are summarized in Table 4. Overall, the comparison with Table 2 showed that images from this category of texture have less correlated textural content than images from the colored texture category. The textural content of the Galaxy IC-342 bands is less correlated than that of the WorldView-2 image. The textural content of some bands in WorldView-2 showed a very small correlation; this was the case, for example, for the dissimilarity of the red and blue channels (r = 0.087).


Table 4: Correlation coefficients (r) between the texture feature channels.

                 Galaxy IC-342             WorldView-2
Feature          R-G    R-B    G-B         R-G    R-B    G-B
Contrast         0.544  0.402  0.494       0.768  0.431  0.728
Dissimilarity    0.634  0.502  0.540       0.794  0.087  0.365

For a detailed analysis of the textural content of the images in Figure 13, per-row correlation coefficient (r) profiles were also estimated as described in Section 3. Except for the two correlation profiles estimated between the red and the green bands of the WorldView-2 image, the ten other profiles showed small correlations. This means that the bands of these two images possess different textural content, in contrast with the VisTex images (Figure 8).

Given the low informative value of the chromatic content of these images, and the low correlation between the textural content of their spectral bands, we can conclude that, for these images, the most discriminative information is texture. This includes intraband and interband texture information. It would be useful to have a standard database in which images have the same characteristics as those studied in this section. A standard database would be useful for the validation of methods focusing only on the texture of color images because the color information has low informative content.

5. Developing the Multiband Texture (MBT) Database

We developed a new texture database for the validation of methods focusing on intraband and interband texture. It is referred to as the Multiband Texture (MBT) database. The key concept was to combine three different grayscale textures to form a new three-channel color texture. These grayscale textures were taken from the proposed NBT database (Section 2). As NBT textures do not have pure spectral discriminative information, the chromatic content of the resulting three-channel texture images does not have discriminative value. In addition, as each image in the NBT has rich texture content, the resulting MBT color textures have important intraband and interband discriminative textural content.

To have a clear idea of the visual appearance of images from this new database, Figure 14 presents a set of 15 images. The names of the three original NBT textures that form each multiband texture are indicated at the bottom of each image. For example, the D28D92D111 multiband texture indicates that the D28 Brodatz texture was used for the red channel, D92 for the green, and D111 for the blue. This figure shows that multiband textures have a wide variety of textures, including coarse and fine textures, such as D31D99D108 and D4D16D17, respectively; in addition, they have directional and random textures, such as D51D83D85 and D109D110D112, respectively.
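The MBT construction principle (three different grayscale textures stacked as the R, G, and B channels of one image) can be sketched directly. The arrays below are random stand-ins for the named NBT textures, not the actual D28, D92, and D111 images.

```python
import numpy as np

def make_multiband(tex_r, tex_g, tex_b):
    """Stack three different grayscale textures as the R, G, and B channels
    of one multiband texture (the MBT construction principle)."""
    assert tex_r.shape == tex_g.shape == tex_b.shape
    return np.dstack([tex_r, tex_g, tex_b]).astype(np.uint8)

# Three synthetic stand-ins for NBT textures (e.g., D28, D92, D111).
rng = np.random.default_rng(3)
d28, d92, d111 = (rng.integers(0, 256, size=(64, 64)) for _ in range(3))
mbt = make_multiband(d28, d92, d111)

# Independent channels: the interband correlation is near zero by construction.
r = np.corrcoef(mbt[..., 0].ravel(), mbt[..., 1].ravel())[0, 1]
print(abs(r) < 0.1)  # -> True
```

Because each channel carries a different texture, the color of the result is produced entirely by interband texture differences, with no monotone background color.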

We analyzed the chromatic and textural content of some images from the MBT database, exactly as was done in the previous sections. As shown in Figure 15, MBT images have rich color content with an almost flat hue histogram (Figures 15(a) and 15(b)). In addition, the correlation coefficients between the three RGB channels of these multiband textures were very small. This demonstrates that the chromatic information has no discriminative value.

The per-row correlation of the textural content of multiband images showed a very low correlation (Figures 15(c) and 15(d)). The average of the 9 × 640 per-row correlation measures was 0.36 with a standard deviation of 0.088, whereas it was 0.99 with a standard deviation of 0.018 for the VisTex images.

The chromatic and textural content of the MBT images is similar to that of the astronomical and remote sensing images. However, some improvement over these latter images was found as the color distribution of the MBT images is richer (almost uniform color distribution) and their textural content is less correlated.

When we focus on studying only the textural content (intraband and interband spatial interactions) of color images, without being influenced by pure color information, MBT images are ideal. The color of MBT images is an intrinsic part of the texture itself. This color is the result of texture variations within and between the different spectral channels. The complete characterization of MBT images can only be achieved by the simultaneous analysis of the texture of its three channels.

The proposed MBT database can be seen as a generalization of the study of Rosenfeld et al. [37], in which the authors worked on a single multiband texture image generated artificially by introducing a spatial shift between the different bands. The spatial shift was used to amplify the differences between textures of the different bands. The MBT database shows more diversity in the visual appearance of texture and has different complexity levels. The MBT is not totally new to the image processing community because it was developed from the well-documented Brodatz album. Previously acquired knowledge from the analysis of texture in the Brodatz album can therefore be useful for the analysis of MBT images.

Textures are usually classified as artificial for computer-generated textures and genuine for textures found in human surroundings [38]. Textures from the MBT have both aspects. The three channels of each multiband texture are real textures from the Brodatz album. At the same time, as the three channels of each multiband texture do not come from the same surface, multiband images are also artificial. Here we propose a new category of texture called hybrid texture—textures from the MBT database are from this new category. The visual appearance of the 154 textures composing this new database is very diverse: fine, medium, coarse, random and directional, and so forth. In addition, the texture of the three bands of each color image from the MBT database has different levels of similarity. This provides different levels of difficulty for characterizing interband texture information. This database is available online (http://pages.usherbrooke.ca/asafia/mbt/).

Based on our results, multiband texture can be defined by extending the definition proposed by Haralick et al. [7] for grayscale texture: the texture of color (or, in general, multiband) images is formed by the spatial distribution of the tonal variations in the same band plus the spatial distribution of the tonal variations across different bands. The first distribution in the proposed definition defines the grayscale texture, defined by Haralick in [7]. The second one defines the part of texture resulting from interband spatial variation. These two types of distributions define multiband texture. Both spatial distributions contribute in different amounts to form the whole texture of the color or multiband image.

An important aspect of this definition is that it clearly identifies a certain part of color as an intrinsic part of the texture. Indeed, gray-level variation across the different bands produces color and, when this variation is the result of interband texture differences, it is identified as part of texture. Consequently, this definition introduces a distinction between this chromatic part of texture and the pure chromatic image content that does not possess texture values.

6. Experiments on MBT Database

Section 5 showed that images from the MBT database have almost the same chromatic content and also have important intraband and interband spatial variations. This has two important implications. The first is that it is not possible to discriminate between MBT images using only chromatic information. The second is that it is not possible to discriminate between MBT images using only intraband texture characterization, as is usually the case for existing color texture databases. In this section we tested the validity of these two observations in the context of texture classification. To that end, we carried out two independent classifications. The first used only spectral information (RGB values) and the second used only intraband texture information. We used a mosaic of eight textures (Figure 16) from the MBT database. Figures 17(a) and 17(b) indicate the names of the MBT images according to their relative positions in the mosaic and the three textures from the Normalized Brodatz Texture database (Section 2) that were used to generate each MBT image.

6.1. Spectral Classification

The mosaic in Figure 16 was classified using several benchmark spectral-based algorithms including K-means [39], Isodata [39], maximum likelihood [35], and mean-shift [40]. The results for all of the tested algorithms showed that none of them were able to identify the eight textures. For the first three classification algorithms, for example, the results showed that all eight textures were almost evenly distributed over the entire area of the mosaic. This supported the observation in Section 5 related to the quasi-uniform distribution of the color content in MBT images. For the mean-shift segmentation algorithm, the boundaries of the detected regions were totally different from those of the eight textures. This result provides clear evidence that color texture images from the MBT database do not possess pure chromatic informative content. The highest overall classification rate for the spectral classifications was 12.5%.
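The failure mode described above can be illustrated with a plain k-means run on RGB values alone. In the sketch below (synthetic data, not the actual MBT mosaic), two regions with different textures but identical color statistics end up split in roughly the same proportions between the clusters, so the cluster map carries no texture boundary.

```python
import numpy as np

def kmeans_rgb(pixels, k=2, iters=20, seed=0):
    """Plain k-means on RGB values only: the kind of purely spectral
    classifier that fails on MBT images, where color is uninformative."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(pixels[:, None] - centers, axis=-1).argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels

# Two 32 x 32 regions with different textures but the same color statistics.
rng = np.random.default_rng(5)
fine = rng.integers(0, 256, size=(32, 32, 3))                    # fine random texture
coarse = np.repeat(rng.integers(0, 256, (32, 4, 3)), 8, axis=1)  # coarse texture
pixels = np.vstack([fine.reshape(-1, 3), coarse.reshape(-1, 3)]).astype(np.float64)
labels = kmeans_rgb(pixels)
frac_fine = (labels[:1024] == 0).mean()
frac_coarse = (labels[1024:] == 0).mean()
# Both regions are split in roughly the same proportions: RGB values alone
# cannot separate the two textures.
print(abs(frac_fine - frac_coarse) < 0.25)
```

Since both regions span the same RGB range, the k-means partition cuts through the color cube, not along the texture boundary.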

6.2. Textural Extraction and Classification

Among the existing texture analysis methods, we selected the wavelet transform [35]. This transform is a powerful technique for the analysis and decomposition of images at multiple resolutions and different frequencies [35]. This property makes it especially suitable for the segmentation and classification of texture [9, 41, 42]. We used as texture feature the local energy measure of each wavelet subband [9], which has proven to be efficient in texture classification [9, 43, 44]:

E(i, j) = \frac{1}{N^2} \sum_{(m, n) \in W_N(i, j)} \lvert w(m, n) \rvert,

where E(i, j) is the energy estimated using a neighborhood W_N(i, j) of size N \times N, w is the wavelet coefficient, and (i, j) gives the spatial position.

The resulting output for a transformed single band at a fixed level is one approximate subband and three detail subbands (i.e., horizontal, vertical, and diagonal). For texture analysis, only the detail subbands were used [45]. As a result, the energy feature was estimated using only the three detail subbands separately. The choice of the wavelet function is a crucial step in texture analysis [46]. It is beyond the scope of this paper to present a detailed study of the effect of wavelet function characteristics on multiband texture discrimination. For the purposes of this study, different wavelet functions were tested. The best combination of wavelet function and moving window for energy feature calculation was the biorthogonal spline function and a moving window of 33 by 33. This was fixed empirically based on the overall classification accuracy of the mosaic in Figure 16.

We used one-level wavelet decomposition because more decomposition levels did not bring significant improvement. Consequently, the classification process used three texture feature images as input in the case of the intensity band and nine in the case of the three-band analysis.
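The wavelet-energy pipeline can be sketched end to end. The snippet below substitutes a one-level Haar decomposition for the biorthogonal spline wavelet used in the paper, and a small 5 × 5 energy window for the 33 × 33 one, purely to keep the example compact; both substitutions are assumptions of this sketch.

```python
import numpy as np

def haar_level1(img):
    """One-level 2-D Haar decomposition (a simple stand-in for the paper's
    biorthogonal spline wavelet): approximation plus three detail subbands."""
    a = img.astype(np.float64)
    lo = (a[:, ::2] + a[:, 1::2]) / 2
    hi = (a[:, ::2] - a[:, 1::2]) / 2
    ll = (lo[::2] + lo[1::2]) / 2   # approximation
    lh = (lo[::2] - lo[1::2]) / 2   # detail subband
    hl = (hi[::2] + hi[1::2]) / 2   # detail subband
    hh = (hi[::2] - hi[1::2]) / 2   # diagonal detail subband
    return ll, (lh, hl, hh)

def local_energy(subband, n=5):
    """Mean absolute coefficient over an n x n moving window (the paper uses
    a 33 x 33 window; n = 5 here only to keep the example small)."""
    pad = n // 2
    p = np.pad(np.abs(subband), pad, mode='edge')
    out = np.zeros_like(subband, dtype=np.float64)
    for di in range(n):
        for dj in range(n):
            out += p[di:di + subband.shape[0], dj:dj + subband.shape[1]]
    return out / (n * n)

checker = ((np.indices((64, 64)).sum(axis=0) % 2) * 255).astype(np.uint8)
approx, (lh, hl, hh) = haar_level1(checker)
energies = [local_energy(s).mean() for s in (lh, hl, hh)]
print(energies[2] > energies[0] and energies[2] > energies[1])  # -> True
```

As in the paper, only the detail subbands feed the energy features; the checkerboard input concentrates its energy in the diagonal detail, illustrating how the features separate directional texture content.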

Many classification processes have been proposed by the pattern recognition community in the literature. For our experiments, a simple minimum distance classifier scheme based on the Euclidean distance was used in order to test the discrimination power of texture features using a basic classifier. Training samples of 50 by 50 pixels each were selected from the center of each of the eight textures in Figure 16 to serve as reference data. The size of 50 by 50 pixels was first determined by evaluating the accuracy of the classifier with different sample sizes ranging from 10 × 10 to 100 × 100 pixels.
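The minimum distance classifier itself is short. The sketch below assigns each pixel's feature vector to the class with the nearest Euclidean mean; the class means and feature values are hypothetical, standing in for the per-class means estimated from the 50 × 50 training samples.

```python
import numpy as np

def min_distance_classify(features, class_means):
    """Assign each pixel's feature vector to the class whose mean is nearest
    in Euclidean distance (the basic classifier used in the experiments)."""
    # features: (H, W, F); class_means: (C, F) -> labels: (H, W)
    d = np.linalg.norm(features[..., None, :] - class_means, axis=-1)  # (H, W, C)
    return d.argmin(axis=-1)

# Two hypothetical texture classes described by 3 energy features each.
means = np.array([[10.0, 2.0, 1.0], [40.0, 30.0, 25.0]])
rng = np.random.default_rng(4)
feat = means[1] + rng.normal(0, 1, size=(8, 8, 3))  # pixels drawn near class 1
labels = min_distance_classify(feat, means)
print((labels == 1).all())  # -> True
```

With well-separated class means, even this basic classifier recovers the labels, so the classification rates reported below reflect the discrimination power of the features rather than the classifier.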

In color images, texture is usually extracted either from the intensity image component [47–50] or from each of the three RGB bands separately [51–53]. Both strategies were tested in two different classifications. The first classification, corresponding to the first strategy, used three texture bands (3 detail subbands). The second used nine texture bands (3 detail subbands × 3 spectral bands).

The classification results are summarized in Table 5. They show that the intensity component did not preserve multiband texture: it provided an overall classification rate of only 22.6%. The three-band strategy provided better results, with an overall classification rate of 46.8%. This means that, for MBT images, analyzing each band separately preserves texture better than using the intensity image component. However, neither strategy achieved a satisfactory classification rate on the MBT mosaic. This means that the texture of this mosaic cannot be reduced to three independent texture planes or to a single intensity component. The texture of MBT images should be analyzed as a whole by considering both the intraband and the interband spatial interactions.


Table 5: Overall classification rates for the two texture extraction strategies.

                                  Intensity   Three-band
Overall classification rate (%)   22.6        46.8

7. Conclusion

This paper examined various fundamental issues of texture information in grayscale, color, and multiband images.

For grayscale texture, we showed that pure spectral information can have a discriminative role. To make texture the main discriminative source of information, we presented an improvement over the Brodatz texture database by normalizing it, in order to eliminate the grayscale background effect. This new database is referred to as the Normalized Brodatz Texture (NBT) database.

For color images, we introduced the concept of colored texture to identify the category of textures in which color is a background with important informative value that is dissociated from texture. Based on this concept, we proposed the Colored Brodatz Texture (CBT) database, which is an extension of the Brodatz texture database. This new database has the advantage of both preserving the rich textural content of the original Brodatz images and also having a wide variety of color content.

We introduced the concept of multiband texture to describe texture resulting from the combined effects of intraband and interband spatial variations. We showed that this type of texture exists in images with high spatial resolution and/or images with spectral channels having very different wavelengths. To study multiband textures, we proposed a new database referred to as the Multiband Texture (MBT) database. Images from this database have two important characteristics. First, their chromatic content—even if it is rich—does not have discriminative value, yet it contributes to form texture. Second, their textural content is characterized by high intraband and interband variation. These two characteristics make this database ideal for the texture analysis of color images without the influence of color information. It fills the gap for databases suitable for the analysis of generalized spatial interactions in multiband space. The classification results of the eight textures from the MBT database confirmed that this database can be used to develop intraband and interband texture-based analysis methods.

Acknowledgments

The authors are grateful to the Natural Sciences and Engineering Research Council of Canada (NSERC) for sponsoring this research through the Postgraduate Scholarship (PhD) awarded to S. Abdelmounaime and the Discovery Grant awarded to H. Dong-Chen.

References

  1. L. R. Sarker and J. E. Nichol, “Improved forest biomass estimates using ALOS AVNIR-2 texture indices,” Remote Sensing of Environment, vol. 115, no. 4, pp. 968–977, 2011.
  2. X. Wang, N. D. Georganas, and E. M. Petriu, “Fabric texture analysis using computer vision techniques,” IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 1, pp. 44–56, 2011.
  3. A. Kassner and R. E. Thornhill, “Texture analysis: a review of neurologic MR imaging applications,” American Journal of Neuroradiology, vol. 31, no. 5, pp. 809–816, 2010.
  4. J. R. Smith, C. Y. Lin, and M. Naphade, “Video texture indexing using spatio-temporal wavelets,” in Proceedings of the International Conference on Image Processing (ICIP '02), vol. 2, pp. II/437–II/440, September 2002.
  5. W. Phillips III, M. Shah, and N. D. Lobo, “Flame recognition in video,” Pattern Recognition Letters, vol. 23, no. 1–3, pp. 319–327, 2002.
  6. R. C. Nelson and R. Polana, “Qualitative recognition of motion using temporal texture,” CVGIP—Image Understanding, vol. 56, no. 1, pp. 78–89, 1992.
  7. R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Transactions on Systems, Man and Cybernetics, vol. 3, no. 6, pp. 610–621, 1973.
  8. D. C. He and L. Wang, “Texture unit, texture spectrum, and texture analysis,” IEEE Transactions on Geoscience and Remote Sensing, vol. 28, no. 4, pp. 509–512, 1990.
  9. A. Laine and J. Fan, “Texture classification by wavelet packet signatures,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1186–1191, 1993.
  10. A. C. Bovik, M. Clark, and W. S. Geisler, “Multichannel texture analysis using localized spatial filters,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 55–73, 1990.
  11. H. Xin, Z. Liangpei, and W. Le, “Evaluation of morphological texture features for mangrove forest mapping and species discrimination using multispectral IKONOS imagery,” IEEE Geoscience and Remote Sensing Letters, vol. 6, no. 3, pp. 393–397, 2009.
  12. A. Voisin, V. A. Krylov, G. Moser, S. B. Serpico, and J. Zerubia, “Classification of very high resolution SAR images of urban areas using copulas and texture in a hierarchical Markov random field model,” IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 1, pp. 96–100, 2013.
  13. R. M. Haralick, “Performance characterization in computer vision,” CVGIP—Image Understanding, vol. 60, no. 2, pp. 245–249, 1994.
  14. P. J. Phillips and K. W. Bowyer, “Empirical evaluation of computer vision algorithms,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 4, pp. 289–290, 1999.
  15. P. Brodatz, Textures: A Photographic Album for Artists & Designers, Dover, New York, NY, USA, 1966.
  16. L. Liu and P. Fieguth, “Texture classification from random features,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 3, pp. 574–586, 2012.
  17. J. Yang, Y. Zhuang, and F. Wu, “ESVC-based extraction and segmentation of texture features,” Computers & Geosciences, vol. 49, pp. 238–247, 2012.
  18. F. M. Khellah, “Texture classification using dominant neighborhood structure,” IEEE Transactions on Image Processing, vol. 20, no. 11, pp. 3270–3279, 2011.
  19. B. M. Carvalho, T. S. Souza, and E. Garduno, “Texture fuzzy segmentation using adaptive affinity functions,” in Proceedings of the 27th Annual ACM Symposium on Applied Computing, pp. 51–53, Trento, Italy, March 2012.
  20. I. J. Sumana, G. Lu, and D. Zhang, “Comparison of curvelet and wavelet texture features for content based image retrieval,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '12), pp. 290–295, July 2012.
  21. T. Ojala, T. Maenpaa, M. Pietikainen, J. Viertola, J. Kyllonen, and S. Huovinen, “Outex—new framework for empirical evaluation of texture analysis algorithms,” in Proceedings of the 16th International Conference on Pattern Recognition (ICPR '02), vol. 1, pp. 701–706, August 2002.
  22. T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971–987, 2002.
  23. P. Janney and Z. Yu, “Invariant features of local textures—a rotation invariant local texture descriptor,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), June 2007.
  24. D. E. Ilea and P. F. Whelan, “Image segmentation based on the integration of colour-texture descriptors—a review,” Pattern Recognition, vol. 44, no. 10-11, pp. 2479–2501, 2011.
  25. Y. Deng and B. S. Manjunath, “Unsupervised segmentation of color-texture regions in images and video,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 8, pp. 800–810, 2001.
  26. G. Paschos, “Perceptually uniform color spaces for color texture analysis: an empirical evaluation,” IEEE Transactions on Image Processing, vol. 10, no. 6, pp. 932–937, 2001.
  27. F. Bianconi, A. Fernández, E. González, D. Caride, and A. Calviño, “Rotation-invariant colour texture classification through multilayer CCR,” Pattern Recognition Letters, vol. 30, no. 8, pp. 765–773, 2009.
  28. K. J. Dana, B. van Ginneken, S. K. Nayar, and J. J. Koenderink, “Reflectance and texture of real-world surfaces,” ACM Transactions on Graphics (TOG), vol. 18, no. 1, pp. 1–34, 1999.
  29. L. Wang and D. C. He, “A new statistical approach for texture analysis,” Photogrammetric Engineering and Remote Sensing, vol. 56, no. 1, pp. 61–66, 1990.
  30. H. Wechsler and M. Kidode, “A random walk procedure for texture discrimination,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 1, no. 3, pp. 272–280, 1979.
  31. L. M. Kaplan, “Extended fractal analysis for texture classification and segmentation,” IEEE Transactions on Image Processing, vol. 8, no. 11, pp. 1572–1585, 1999.
  32. J. Melendez, M. A. Garcia, D. Puig, and M. Petrou, “Unsupervised texture-based image segmentation through pattern discovery,” Computer Vision and Image Understanding, vol. 115, no. 8, pp. 1121–1133, 2011.
  33. K. I. Kilic and R. H. Abiyev, “Exploiting the synergy between fractal dimension and lacunarity for improved texture recognition,” Signal Processing, vol. 91, no. 10, pp. 2332–2344, 2011.
  34. K. Zuiderveld, “Contrast limited adaptive histogram equalization,” in Graphics Gems IV, pp. 474–485, Morgan Kaufmann, Burlington, Mass, USA, 1994.
  35. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd edition, Prentice Hall, Upper Saddle River, NJ, USA, 2008.
  36. A. Drimbarean and P. F. Whelan, “Experiments in colour texture analysis,” Pattern Recognition Letters, vol. 22, no. 10, pp. 1161–1167, 2001.
  37. A. Rosenfeld, C. Y. Wang, and A. Y. Wu, “Multispectral texture,” IEEE Transactions on Systems, Man and Cybernetics, vol. 12, no. 1, pp. 79–84, 1982.
  38. A. Stolpmann and L. S. Dooley, “Genetic algorithms for automized feature selection in a texture classification system,” in Proceedings of the 4th International Conference on Signal Processing (ICSP '98), vol. 2, pp. 1229–1232, October 1998.
  39. J. T. Tou and R. C. Gonzalez, Pattern Recognition Principles, Addison-Wesley, Reading, Mass, USA, 1974.
  40. D. Comaniciu and P. Meer, “Mean shift: a robust approach toward feature space analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603–619, 2002.
  41. M. Unser, “Texture classification and segmentation using wavelet frames,” IEEE Transactions on Image Processing, vol. 4, no. 11, pp. 1549–1560, 1995.
  42. Y. Dong and J. Ma, “Wavelet-based image texture classification using local energy histograms,” IEEE Signal Processing Letters, vol. 18, no. 4, pp. 247–250, 2011.
  43. M. Acharyya, R. K. De, and M. K. Kundu, “Extraction of features using M-band wavelet packet frame and their neuro-fuzzy evaluation for multitexture segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1639–1644, 2003.
  44. A. Safia, M. F. Belbachir, and T. Iftene, “A wavelet transformation for combining texture and color: application to the combined classification of the HRV SPOT images,” International Journal of Remote Sensing, vol. 27, no. 18, pp. 3977–3990, 2006.
  45. G. van de Wouwer, P. Scheunders, S. Livens, and D. van Dyck, “Wavelet correlation signatures for color texture characterization,” Pattern Recognition, vol. 32, no. 3, pp. 443–451, 1999.
  46. A. Mojsilovic, D. Rackov, and M. Popovic, “On the selection of an optimal wavelet basis for texture characterization,” in Proceedings of the International Conference on Image Processing (ICIP '98), vol. 3, pp. 678–682, October 1998.
  47. G. Paschos and K. P. Valavanis, “A color texture based visual monitoring system for automated surveillance,” IEEE Transactions on Systems, Man and Cybernetics C, vol. 29, no. 2, pp. 298–307, 1999.
  48. C. Garcia and G. Tziritas, “Face detection using quantized skin color regions merging and wavelet packet analysis,” IEEE Transactions on Multimedia, vol. 1, no. 3, pp. 264–277, 1999.
  49. J. Chen, T. N. Pappas, A. Mojsilović, and B. E. Rogowitz, “Adaptive perceptual color-texture image segmentation,” IEEE Transactions on Image Processing, vol. 14, no. 10, pp. 1524–1536, 2005.
  50. X. Y. Wang, T. Wang, and J. Bu, “Color image segmentation using pixel wise support vector machine classification,” Pattern Recognition, vol. 44, no. 4, pp. 777–787, 2011.
  51. A. Sengur, “Wavelet transform and adaptive neuro-fuzzy inference system for color texture classification,” Expert Systems with Applications, vol. 34, no. 3, pp. 2120–2128, 2008.
  52. A. Y. Yang, J. Wright, Y. Ma, and S. S. Sastry, “Unsupervised segmentation of natural images via lossy data compression,” Computer Vision and Image Understanding, vol. 110, no. 2, pp. 212–225, 2008.
  53. A. Emran, M. Hakdaoui, and J. Chorowicz, “Anomalies on geologic maps from multispectral and textural classification: the Bleida mining district (Morocco),” Remote Sensing of Environment, vol. 57, no. 1, pp. 13–21, 1996.

Copyright © 2013 Safia Abdelmounaime and He Dong-Chen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

