Mathematical Problems in Engineering
Volume 2018, Article ID 1673283, 17 pages
https://doi.org/10.1155/2018/1673283
Research Article

A Comparative Study on Discrete Shmaliy Moments and Their Texture-Based Applications

1Facultad de Ingeniería, Universidad Nacional Autónoma de México, Mexico City, Mexico
2Luxembourg Institute of Science and Technology (LIST), Belvaux, Luxembourg

Correspondence should be addressed to Boris Escalante-Ramírez; boris@unam.mx

Received 31 March 2018; Accepted 8 July 2018; Published 9 August 2018

Academic Editor: Vincenzo Vespri

Copyright © 2018 Germán González et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In recent years, discrete orthogonal moments have attracted the attention of the scientific community because they are a suitable tool for feature extraction. However, the numerical instability that arises in the computation of high-order moments is the main drawback that limits their wider application. In this article, we propose an image classification method based on discrete Shmaliy moments, a new family of moments derived from Shmaliy polynomials, that avoids these numerical errors. Shmaliy polynomials have two important characteristics: a one-parameter definition, which is simpler than that of popular polynomial bases such as Krawtchouk, Hahn, and Racah, and a linear weight function that eases the computation of the polynomial coefficients. We use the IICBU-2008 database to validate our proposal and include Tchebichef and Krawtchouk moments for comparison purposes. The experiments are carried out through 5-fold cross-validation, and the results are computed using random forest, support vector machine, naïve Bayes, and k-nearest neighbors classifiers.

1. Introduction

Over the past few years, discrete orthogonal moments (DOMs) have attracted the attention of the image analysis community because they describe local and global features in images efficiently. The DOMs are optimal in the sense that they represent images with minimal information redundancy. This characteristic stems from the orthogonality of the polynomial basis on which the moments are computed [1]. Furthermore, contrary to the continuous moments, the DOMs are defined in the discrete domain; hence, their computation does not require spatial quantization [2].

Families of DOMs such as Racah [3], dual Hahn [4], and Tchebichef [5] have been used successfully in many applications, for instance, feature analysis [6, 7], face recognition [8, 9], and image retrieval [10, 11], to name a few. In particular, medical imaging has exploited orthogonal moments to characterize different types of biological tissue and assess the severity of several diseases [12–16].

An important driver of this attention is the discussion of the numerical instability problem that occurs in high-order moments. Since the DOMs are calculated as the projection of the image on a weighted kernel or set of orthogonal polynomials, their computation is usually linked to the size of the image: the higher the order of the moment, the higher the numerical error.

Traditionally, the computation of the DOMs is carried out by the use of recursive equations. However, this methodology tends to accumulate and propagate errors that degrade the representation of the image. Other methodologies have tackled this issue. In [17], Mukundan presented a seminal paper where the symmetry and renormalization of the Tchebichef polynomials are used to reduce the accumulation of numerical errors; the discrete Krawtchouk moments [18] are defined on a set of weighted Krawtchouk polynomials to improve the numerical stability; and Bayraktar et al. [19] proposed computing an offline lookup table of Tchebichef and Krawtchouk polynomial coefficients to overcome calculation errors.

In addition to the aforementioned normalization methods, other approaches such as partitioning and sliding windows have also been proposed. The former divides the image into smaller nonoverlapping subimages where low-order moments are calculated independently, while the latter uses a rectangular window that usually slides from left to right and from top to bottom, computing low-order DOMs on every window region.

A new class of discrete orthogonal moments derived from Shmaliy polynomials [20] was recently proposed. Shakibaei Asli and Flusser called this new class of moments "discrete Shmaliy moments" (DSMs) [21]. We highlight that the Shmaliy polynomial basis has two important characteristics: a one-parameter definition, which is simpler than that of popular polynomial bases such as Krawtchouk, Hahn, dual Hahn, and Racah, and a linear weight function that eases the computation of the polynomial coefficients. This is in contrast to the Tchebichef polynomials, which use a symmetric nonlinear weight function.

So far, the experiments conducted by Shakibaei Asli and Flusser have shown the capability of the DSMs as feature descriptors in one dimension. In this paper, we explore the use of the discrete Shmaliy moments as 2D texture descriptors and present an extensive comparative study using the IICBU-2008 database [22]. A quantitative analysis based on N-way ANOVA is also conducted to support our proposal.

The rest of this paper is organized as follows. In Section 2, the mathematical theory of DOMs is presented and DSMs are introduced as the main part of this paper; Tchebichef and Krawtchouk moments are also briefly described for comparison purposes. In Section 3, we review the texture model and the general classification scheme; the image database is described in Section 4; and in Section 5, the experimental results are summarized. Finally, Section 6 contains concluding remarks.

2. Discrete Shmaliy Moments

In 1980, Teague defined the orthogonal moments in the continuous space [23]. Since then, many variations have been published and used successfully. For example, Zernike and Legendre moments have been used as detectors of invariant features [24–26]. However, the implementation of a continuous basis involves a discrete approximation that affects the properties of invariance and orthogonality. In addition, in some cases, the coordinate space of the image must be transformed in order to compute the continuous moments [27, 28].

Generally speaking, the orthogonal moments are a set of scalar quantities that are uncorrelated with one another. They are useful for characterizing local and global features in images [29]. The orthogonal moments are computed as the projection of the image $f(x,y)$ onto an orthogonal polynomial basis $\{P_p(x)\}$. Formally, they are defined as

$$\phi_{pq} = \sum_{x=0}^{N-1}\sum_{y=0}^{M-1} P_p(x)\,P_q(y)\,f(x,y).$$

One way of interpreting the projection is as a correlation measure between the image and the polynomial basis [15]. On the other hand, the discrete orthogonal moments overcome the aforementioned limitations because they satisfy the orthogonality property exactly and are defined directly on the image domain.

Discrete Shmaliy polynomials were developed as unbiased impulse responses for predictive FIR filters [30] and have been applied to blind fitting of finite-length data [20]. The discrete Shmaliy polynomials have two important characteristics: their definition is simple because it requires only one parameter, which represents the length of the data, and they use a linear weight function, which eases the computation. In addition, the discrete Shmaliy polynomials become the discrete version of the radial Mellin polynomials when the length of the signal approaches infinity [31].

The $p$th-order Shmaliy polynomial $S_p(x;N)$ is defined in terms of a terminating hypergeometric series, where $N$ is the length of the data and $(a)_k$ denotes the Pochhammer symbol, $(a)_k = a(a+1)\cdots(a+k-1)$ with $(a)_0 = 1$; the generalized hypergeometric function is given by

$${}_rF_s(a_1,\dots,a_r;\,b_1,\dots,b_s;\,z) = \sum_{k=0}^{\infty}\frac{(a_1)_k\cdots(a_r)_k}{(b_1)_k\cdots(b_s)_k}\,\frac{z^k}{k!}.$$
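The Pochhammer symbol and the terminating generalized hypergeometric series that underlie these polynomial definitions can be sketched in a few lines of Python (function names are illustrative, not from the paper):

```python
from math import factorial

def pochhammer(a, k):
    """Rising factorial (a)_k = a(a+1)...(a+k-1), with (a)_0 = 1."""
    result = 1
    for i in range(k):
        result *= a + i
    return result

def hyp_pfq(upper, lower, z, max_terms=200):
    """Generalized hypergeometric series rFs(a1..ar; b1..bs; z).

    The series terminates when some upper parameter is a nonpositive
    integer, which is the case for all polynomial definitions here."""
    total = 0.0
    for k in range(max_terms):
        num = 1.0
        for a in upper:
            num *= pochhammer(a, k)
        if num == 0:            # terminating series: stop before dividing
            break
        den = float(factorial(k))
        for b in lower:
            den *= pochhammer(b, k)
        total += num / den * z ** k
    return total
```

For example, `hyp_pfq([-2, -2], [1], 1.0)` evaluates the terminating series ${}_2F_1(-2,-2;1;1) = 6$, in agreement with the Chu–Vandermonde identity.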

In practice, the weighted discrete Shmaliy polynomials [21] are used to minimize the instability at high-order moments. The orthogonal basis is built as

$$\hat{S}_p(x;N) = S_p(x;N)\sqrt{\frac{w(x)}{\rho(p,N)}},$$

where $w(x)$ is the (linear) weight function and $\rho(p,N)$ defines the norm.

The weighted discrete Shmaliy polynomials satisfy the following orthogonality condition:

$$\sum_{x=0}^{N-1}\hat{S}_p(x;N)\,\hat{S}_q(x;N) = \delta_{pq},$$

where $\delta_{pq}$ is the Kronecker delta.

According to Shakibaei Asli and Flusser, it is possible to define multivariate polynomials (2D and 3D) as products of the univariate polynomials (1D) in each dimension.

Therefore, using (3), the discrete Shmaliy moments of order $(p,q)$ of the image $f(x,y)$ are defined as

$$\Phi_{pq} = \sum_{x=0}^{N-1}\sum_{y=0}^{M-1}\hat{S}_p(x;N)\,\hat{S}_q(y;M)\,f(x,y),$$

where $N$ and $M$ are the size of the image on the $x$- and $y$-axes, respectively. 1D and 2D discrete Shmaliy polynomials are shown in Figures 1(a) and 1(b), respectively.
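Because the 2D basis is a product of 1D polynomials, the moment transform and its inverse reduce to two matrix products. The sketch below uses an arbitrary orthonormal matrix as a stand-in for the weighted Shmaliy basis (an assumption: any orthonormal basis exhibits the same separable mechanics and exact invertibility):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8

# Stand-in orthonormal 1D basis: row p plays the role of the p-th weighted
# Shmaliy polynomial sampled at x = 0..N-1.
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
P = Q.T

f = rng.standard_normal((N, N))   # toy "image"
M = P @ f @ P.T                   # M[p, q] = sum_x sum_y P_p(x) P_q(y) f(x, y)
f_rec = P.T @ M @ P               # exact inverse, by orthonormality
```

Since the basis is orthonormal, `f_rec` reproduces `f` to machine precision; truncating `M` to low orders yields the usual compressed reconstruction.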

Figure 1: According to Shakibaei Asli and Flusser [21], the 2D discrete Shmaliy polynomials are obtained as the product of two unidimensional polynomials. (a) 1D and (b) 2D weighted discrete Shmaliy polynomials from order 0 to 4.
2.1. Other Discrete Orthogonal Moments

In order to conduct a comparative study, we also use two well-known discrete orthogonal moments. A brief introduction to the basic theory of the discrete Tchebichef moments [5] and the discrete Krawtchouk moments [32] is presented in this section.

(a) The discrete Tchebichef polynomials are defined in terms of hypergeometric functions as

$$t_n(x) = (1-N)_n\,{}_3F_2(-n,-x,1+n;\,1,1-N;\,1),$$

where $n, x = 0, 1, \dots, N-1$. The orthogonality property of the discrete Tchebichef polynomials is satisfied by

$$\sum_{x=0}^{N-1} t_m(x)\,t_n(x) = \rho(n,N)\,\delta_{mn},$$

where $\delta_{mn}$ is the Kronecker delta and the normalizing factor is defined as

$$\rho(n,N) = (2n)!\binom{N+n}{2n+1}.$$

For a practical and stable implementation, the normalized discrete Tchebichef polynomials (see Figure 2(a)),

$$\tilde{t}_n(x) = \frac{t_n(x)}{\sqrt{\rho(n,N)}},$$

are used to build the basis. Using (9), the discrete Tchebichef moments (DTMs) of the image $f(x,y)$ are computed as

$$T_{pq} = \sum_{x=0}^{N-1}\sum_{y=0}^{M-1}\tilde{t}_p(x)\,\tilde{t}_q(y)\,f(x,y),$$

where $N$ and $M$ are the image size on the $x$- and $y$-axes, respectively.

(b) The discrete Krawtchouk polynomials [32] constitute an orthogonal polynomial basis related to the binomial distribution. The classic $n$th-order Krawtchouk polynomial is defined as

$$K_n(x;p,N) = {}_2F_1\!\left(-n,-x;\,-N;\,\frac{1}{p}\right),$$

where $p\in(0,1)$ and $n, x = 0, 1, \dots, N$. For stable computation, the weighted discrete Krawtchouk polynomials determine the basis as

$$\bar{K}_n(x;p,N) = K_n(x;p,N)\sqrt{\frac{w(x;p,N)}{\rho(n;p,N)}},$$

where $w(x;p,N) = \binom{N}{x}p^{x}(1-p)^{N-x}$ is the weight function and $\rho(n;p,N) = (-1)^{n}\left(\frac{1-p}{p}\right)^{n}\frac{n!}{(-N)_n}$ is the norm. The weighted discrete Krawtchouk polynomials satisfy the orthogonality condition $\sum_{x=0}^{N}\bar{K}_m(x;p,N)\,\bar{K}_n(x;p,N) = \delta_{mn}$. From (12), the discrete Krawtchouk moments (DKMs) of the image $f(x,y)$ are defined as (see Figure 2(b))

$$Q_{pq} = \sum_{x=0}^{N-1}\sum_{y=0}^{M-1}\bar{K}_p(x;p,N-1)\,\bar{K}_q(y;p,M-1)\,f(x,y),$$

where $N$ and $M$ are the size of the image on the $x$- and $y$-axes, respectively. DKMs are able to extract local features when the parameter $p$ varies between $0$ and $1$.

Figure 2: Unidimensional discrete orthogonal polynomials. (a) Tchebichef and (b) Krawtchouk polynomial bases from order 0 to 4.
2.2. Computational Aspects of the Discrete Orthogonal Moments

We have mentioned that a key issue of the orthogonal polynomials is the numerical instability when the order of the polynomial becomes large. However, another important problem is the computational cost, which tends to be high and limits the use of the orthogonal moments when online computation is required [33].

There are many different ways to compute the coefficients of the orthogonal polynomials, e.g., by the hypergeometric definition [34], in terms of the Rodrigues formula [35], using the closed-form representation, or by iterating the recurrence equations [36].

Morales-Mendoza et al. [20] showed that the orthogonal Shmaliy polynomials satisfy the three-term recurrence relation. This relation can be used for the numerical implementation of the polynomials. The three terms are the length of the signal, the polynomial order, and the signal index. As we mentioned in Section 2, the discrete Shmaliy polynomials have two main advantages in comparison to Tchebichef and Krawtchouk polynomials: one-parameter recursion definition and linear weight function.

We conducted time complexity experiments to compare the recursive implementation of Shmaliy, Tchebichef, and Krawtchouk polynomials. Specifically, the implementation of the weighted discrete Shmaliy polynomials is based on the n-recursion proposal by Asli and Flusser [21]; the normalized discrete Tchebichef polynomials are computed with the x-recursion formula proposed by Mukundan [5, 17]; and the weighted discrete Krawtchouk polynomials are calculated using the n-recursion definition proposed by Yap et al. [18].

Figure 3 shows the mean computation time of the discrete polynomials up to the 100th order; each polynomial basis was calculated 1000 times. The computation of the Shmaliy polynomials is faster than that of the Tchebichef and Krawtchouk polynomials. Specifically, Krawtchouk polynomials use the same type of recursion as Shmaliy polynomials but depend on two parameters, which may affect the computation time. Tchebichef polynomials use a different recursion implementation; therefore, a direct comparison is not adequate.

Figure 3: Mean computation time in milliseconds of the discrete Shmaliy, Tchebichef, and Krawtchouk polynomials.

Asli and Flusser [21] performed a time comparison test between Tchebichef and Shmaliy moments. They also compared the hypergeometric and n-recursion implementations and showed that the recursive method is faster than the hypergeometric definition. They concluded that there are no significant differences between the time complexities of Shmaliy and Tchebichef moments. The recursive computation of DOMs and its optimization is a state-of-the-art topic; e.g., Shu et al. [33] studied the computational and time complexity of the recursive computation of Tchebichef moments. They also compared and contrasted their proposal with the recursive methods of Kotoulas and Andreadis [37], Papakostas et al. [38], and Wang and Wang [39].

3. Texture Description Based on Discrete Shmaliy Moments

Texture classification involves two main tasks: feature extraction and similarity measurement. This paper is focused on feature extraction; however, the target is the correct classification of the image database described in Section 4. A complete classification scheme is explained below; it is based on statistical descriptors obtained through local texture analysis using overlapping square windows.

3.1. Textural Features

The discrete orthogonal moments, described previously in Section 2, are scalar quantities that characterize a specific function. For example, in accordance with (5), the value of the DSMs is determined by the correlation between the image $f(x,y)$ and the polynomial basis, which is built from the weighted discrete Shmaliy polynomials.

The polynomial basis acts as a filter for the DOMs. The value of the moment is larger when the variations of the image are similar to those of the basis functions in the $x$ and $y$ directions. This characteristic is important for the analysis of textures because texture is defined as the spatial repetition of gray-level patterns in a region [40]. Therefore, it is possible to obtain a description of the texture when the magnitude dependency of the $s$th-order moments is evaluated. The description is related to the frequency responses of the polynomial basis. Thus, the textural features based on the discrete Shmaliy moments are computed as

$$F(s) = \sum_{p+q=s}\left|\Phi_{pq}\right|,$$

where $s = 0, 1, \dots, 2(W-1)$ for a window of size $W$ and $\Phi_{pq}$ is defined in (5).
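Reading the order-wise aggregation as a sum of moment magnitudes along each anti-diagonal $p+q=s$ of the moment matrix (an assumption consistent with the description above), the feature computation is a one-liner per bin:

```python
import numpy as np

def texture_signature(M):
    """Sum the magnitudes of the 2D moments along each anti-diagonal
    p + q = s, giving 2W - 1 bins for a W x W moment matrix M."""
    W = M.shape[0]
    return np.array([np.abs(np.fliplr(M).diagonal(W - 1 - s)).sum()
                     for s in range(2 * W - 1)])
```

`np.fliplr` turns each anti-diagonal of `M` into an ordinary diagonal, so `diagonal(W - 1 - s)` collects exactly the entries with $p + q = s$.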

Marcos and Cristóbal [29] performed a validation process with DTMs as texture descriptors. They compared their proposal against standard methods such as Haralick cooccurrence matrix [41], Gaussian filtering [42], and local binary patterns [43]. The results showed that DTMs collect essential texture information with similar results compared to conventional methods.

The textural features can be computed using any of the polynomial bases mentioned in Section 2, and they are used in the experiments of Section 5.

3.2. Statistical Textural Features

In [29], the authors calculated one textural feature vector per image, which implies the computation of high-order moments. According to Mukundan [17], this way of computing moments may produce overflow due to unstable oscillations of the polynomials. This numerical instability may appear in the Krawtchouk and Tchebichef polynomial bases and is one of the most important reasons why DOMs have not been widely applied to texture description tasks.

Our proposal includes a modification based on overlapping square windows that shift over the image. This modification avoids the computation of high-order moments and ensures numerical stability. Since overlapping windows generate an over-description of the image, we calculate the corresponding first three central moments (see Figure 4).

Figure 4: The description of the textures is based on sliding windows that move top-to-down and left-to-right. On every window position, the corresponding orthogonal moments are computed.

The statistical textural feature vector $\mathbf{v}$, based on the textural features $F_i(s)$, is calculated over all windows, where $i$ indexes the window position as the window slides, usually left-to-right and top-to-bottom.

This description scheme is useful despite inter- and intraclass differences in image size. The vector is built as

$$\mathbf{v} = \left[\mu,\ \sigma,\ \kappa\right],$$

where $\mu$, $\sigma$, and $\kappa$ are the mean, standard deviation, and kurtosis of the textural features over all windows, respectively.

We can say that (16) describes the texture. The classification scheme shows how the statistical textural features are computed for each image (Figure 5). Once the features are calculated, they are used to determine the classification rule.
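Under the assumption that the descriptor collects the per-bin mean, standard deviation, and (excess) kurtosis of the window signatures, the window generation and the statistic computation can be sketched with NumPy only (helper names are illustrative):

```python
import numpy as np

def sliding_windows(img, W, overlap=0.5):
    """Yield W x W windows, left-to-right and top-to-bottom,
    with the given fractional overlap between neighbours."""
    step = max(1, int(W * (1 - overlap)))
    H, L = img.shape
    for y in range(0, H - W + 1, step):
        for x in range(0, L - W + 1, step):
            yield img[y:y + W, x:x + W]

def statistical_features(signatures):
    """Concatenate per-bin mean, standard deviation, and excess kurtosis
    of the textural features collected over all windows."""
    S = np.asarray(signatures, dtype=float)
    mu = S.mean(axis=0)
    sigma = S.std(axis=0)
    d = S - mu
    kappa = (d ** 4).mean(axis=0) / (d ** 2).mean(axis=0) ** 2 - 3.0
    return np.concatenate([mu, sigma, kappa])
```

The resulting vector has three times the signature length, regardless of how many windows a given image produces, which is what makes images of different sizes comparable.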

Figure 5: General scheme of our proposal to classify texture images. After computing local DOMs using sliding windows, the statistical textural feature vectors are computed and classified.
3.3. Linear Discriminant Analysis

The computation of the statistical textural features involves large vectors because the textural features (Section 3.1) have a length of $2W-1$ bins, where $W$ is the window size. Therefore, overfitting may occur, i.e., an excessive adjustment of the classification algorithm that leads to inaccurate results when new data are processed [44, 45].

In order to avoid this issue, we use the discriminant analysis proposed by Fisher [46] for binary problems, which is one of the most widely used methods for dimensionality reduction. Linear discriminant analysis (LDA) projects the feature vectors into clusters in such a way that the data geometry is preserved. Let $\{\mathbf{x}_i\}$ be the samples of $C$ classes; then Fisher's projection is given by the vector $\mathbf{w}$ that maximizes

$$J(\mathbf{w}) = \frac{\mathbf{w}^{T}S_{B}\,\mathbf{w}}{\mathbf{w}^{T}S_{W}\,\mathbf{w}}.$$

The between-class scatter matrix is

$$S_{B} = \sum_{c=1}^{C} N_c\,(\boldsymbol{\mu}_c - \boldsymbol{\mu})(\boldsymbol{\mu}_c - \boldsymbol{\mu})^{T},$$

where $\boldsymbol{\mu}_c$ is the mean of class $c$, $N_c$ is the number of samples in class $c$, and $\boldsymbol{\mu}$ is the general average.

On the other hand, the within-class scatter matrix is defined by

$$S_{W} = \sum_{c=1}^{C}\sum_{\mathbf{x}_i \in c}(\mathbf{x}_i - \boldsymbol{\mu}_c)(\mathbf{x}_i - \boldsymbol{\mu}_c)^{T},$$

where $\boldsymbol{\mu}_c$ is the mean of class $c$.

Finally, the optimal projection matrix is formed by the eigenvectors associated with the $C-1$ most relevant eigenvalues of $S_{W}^{-1}S_{B}$.
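The Fisher projection can be sketched directly from the two scatter matrices; the `eps` regularization of $S_W$ below is an implementation convenience, not part of the original method:

```python
import numpy as np

def lda_projection(X, y, eps=1e-6):
    """Fisher projection: leading C - 1 eigenvectors of S_W^{-1} S_B
    (eps regularizes S_W against singularity)."""
    classes = np.unique(y)
    d = X.shape[1]
    mu = X.mean(axis=0)
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)    # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)                 # within-class scatter
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + eps * np.eye(d), Sb))
    order = np.argsort(-vals.real)[: len(classes) - 1]
    return vecs[:, order].real
```

Projecting the feature vectors onto the returned matrix yields the $C-1$-dimensional representation used in the experiments.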

4. IICBU-2008 Benchmark

The IICBU-2008 benchmark was presented by Shamir et al. [22]. It is composed of 11 datasets, which represent a wide range of biological imaging problems. The images are stored as 8- or 16-bit TIFF (Tagged Image File Format) files, and the image sizes vary from dataset to dataset. The datasets used in this study are listed in Table 1 and briefly reviewed below.

Table 1: Characteristics of the datasets included in the IICBU-2008 benchmark.
4.1. Binucleate

The dataset was acquired to distinguish the binucleate cellular phenotype from normal cells. The binucleate phenotype is associated with failures in cell division and is a target when specialists look for genes that affect cell division. Many agents used in chemotherapy also alter cell division. The cells included in this dataset belong to Drosophila melanogaster and are shown in Figure 6.

Figure 6: The two classes defined in the Binucleate dataset are (a) binucleate and (b) nonbinucleate.
4.2. Celegans

The full name of the dataset is Caenorhabditis elegans muscle aging, and its principal purpose is to estimate the age of nematodes using images of muscle tissue. The images were captured with a light microscope at different ages and were assigned the respective chronological age rather than the morphological age. These ages are not fully correlated because physical development differs among individuals of the same species even at the same chronological age.

Celegans has a different purpose than the rest of the datasets included in the IICBU-2008 benchmark: the principal goal is to find a variable that correlates morphological and chronological ages. The ages are 1, 2, 4, and 8 days. A sample of each class is shown in Figure 7.

Figure 7: The chronological ages of the Celegans dataset: (a) 1, (b) 2, (c) 4, and (d) 8 days.
4.3. CHO and HeLa

The Chinese Hamster Ovary (CHO) dataset was published in 1998 [47]. CHO is focused on the identification of subcellular organelles and cellular structures. Figure 8 shows one slice of each CHO class. HeLa was published in 2001 [48] as an extension of the CHO dataset and supplies new structures for the identification task. Figure 9 contains one slice of each HeLa class. These images were originally intended for finding the location of proteins in cells; the location of a protein relative to other proteins or functional structures points to its purpose.

Figure 8: Classes of the CHO dataset: (a) Anti-giantin, (b) Hoechst 33258 (DNA), (c) Anti-lamp2, (d) Anti-nop4, and (e) Anti-tubulin.
Figure 9: Classes of HeLa dataset: (a) Actin, (b) DNA (Nuclei), (c) TfR (Endosome), (d) ER (Endoplasmic reticulum), (e) Giantin (cis/medial Golgi), (f) GPP130 (cis Golgi), (g) Lamp2 (Lysosomes), (h) Tubulin, (i) Mitochondria, and (j) Nucleolin (Nucleoli).
4.4. Liver Aging

The Atlas of Gene Expression in Mouse Aging Project (AGEMAP) is a study performed by the National Institute on Aging and the National Institutes of Health in the United States. This study involves 48 mice, male and female, on ad libitum (unrestricted) and caloric-restriction diets and of different ages: 1, 6, 12, and 24 months. The livers were extracted from sacrificed mice and stained with Hematoxylin/Eosin (H+E). The images were acquired with the brightfield microscopy technique. Figure 10 shows slices of livers at different ages. The AGEMAP images can be analyzed in different manners: by age, gender, and diet.

Figure 10: The images belong to the ad libitum diet from the Liver Aging database: (a) 1, (b) 6, (c) 16, and (d) 24 months.
4.5. Lymphoma

The hematologic diseases include different types of leukemia and lymphoma. Although these diseases are not common, they have aroused medical interest in recent years. For that reason, this dataset is one of the most popular databases and has been used in several studies on classification [49, 50] and registration [51]. The whole image dataset is stained with Hematoxylin/Eosin (H+E). The Lymphoma database is composed of three types of hematologic disease: chronic lymphocytic leukemia (CLL) with 133 samples, follicular lymphoma (FL) with 139, and mantle cell lymphoma (MCL) with 122. Figure 11 shows slices of the Lymphoma dataset.

Figure 11: Classes of Lymphoma database: (a) CLL, (b) FL, and (c) MCL.

5. Experimental Results

During the preprocessing stage, color images in Lymphoma and Liver aging datasets were transformed into gray scale and the size of the images was reduced by 50%.

In addition, the backgrounds of the images in C. elegans, CHO, and HeLa (Figures 7, 8, and 9, respectively) contain irrelevant information. For this reason, the background content was discarded through the analysis of the overlapping windows in the following manner: if at least 50% of the pixels within the window contain information, the window is considered not to be background and the statistical textural features are computed; otherwise, the window is dismissed.
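The 50% rule is a simple pixel count; the sketch below reads "contain information" as exceeding a background intensity threshold (the threshold itself is a hypothetical parameter, not specified in the paper):

```python
import numpy as np

def is_foreground(window, threshold=0):
    """Keep a window when at least 50% of its pixels carry information,
    here read as intensity above a (hypothetical) background threshold."""
    return bool(np.mean(window > threshold) >= 0.5)
```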

For local analysis, every image is divided into overlapping square windows. We varied the size of the windows from to pixels, the step of window size is pixels, and the overlap is 50%. This process is shown in Figure 4. For Krawtchouk moments, ; see (14).

The overlapping windows are processed as Figure 5 depicts. The textural features, see (15), are calculated on every overlapping window. The proposed statistical textural features are computed according to their general definition given in (16).

Linear discriminant analysis, described in Section 3.3, is applied to the statistical textural features before the classification and validation process. Afterwards, the feature vectors are mean-centered (zero mean and unit standard deviation). Keep in mind that the resulting vectors from LDA are of length $C-1$, where $C$ is the number of classes. The new feature vectors are shown in Figure 12. Note that the CHO and HeLa datasets are only partially shown because their dimensionality, with 4 and 9 features, respectively, exceeds the three-dimensional representation.

Figure 12: The statistical textural features are transformed with linear discriminant analysis. Note that the projections obtained improve the quality of the clusters and the boundaries among them.

The classification of the datasets is performed through 5-fold cross-validation. We use five classifiers: k-nearest neighbors (KNN) [52], support vector machines (SVM) with linear and Gaussian kernels [53, 54], naïve Bayes [55], and random forest (RF) [56].
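The cross-validation loop itself is classifier-agnostic; the dependency-free sketch below uses a minimal 1-nearest-neighbour rule as a stand-in for the KNN/SVM/naïve-Bayes/RF bank used in the experiments:

```python
import numpy as np

def five_fold_accuracy(X, y, seed=0):
    """5-fold cross-validation with a minimal 1-nearest-neighbour
    classifier standing in for the classifier bank of the paper."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, 5)
    accs = []
    for i in range(5):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(5) if j != i])
        # 1-NN prediction: label of the closest training sample
        dist = np.linalg.norm(X[test, None, :] - X[None, train, :], axis=-1)
        pred = y[train][dist.argmin(axis=1)]
        accs.append(np.mean(pred == y[test]))
    return float(np.mean(accs))
```

Each sample is tested exactly once, and the reported figure is the mean accuracy over the five held-out folds, matching the protocol described above.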

The mean accuracy results for each dataset are shown in the following figures: Binucleate results in Figure 13, Celegans in Figure 14, CHO in Figure 15, HeLa in Figure 16, Liver aging in Figure 17, and Lymphoma in Figure 18.

Figure 13: Mean accuracy results for Binucleate dataset. This dataset consists of two classes. The dataset has the particularity that LDA projects and separates the classes in such a way that the classification becomes trivial. However, when SVM-rbf is used, there is a small error caused by the influence of the parameters of the Gaussian kernel.
Figure 14: Mean accuracy results for Celegans dataset. The dataset consists of four classes. The best results were obtained when the size of the local window is bigger than 20.
Figure 15: Mean accuracy results for CHO dataset. The dataset consists of five classes of cellular structures of hamster ovary. On this dataset, Shmaliy and Tchebichef features show a similar and uniform performance when the window size is bigger than 30.
Figure 16: Mean accuracy results for HeLa dataset. The dataset, an extension of the CHO database, consists of ten classes of subcellular structures. The best results were obtained when the size of the local window is bigger than 40.
Figure 17: Mean accuracy results for Liver Aging dataset. This dataset consists of four classes. The dataset has the particularity that classifiers perform similarly among them, and the accuracy increases with larger windows.
Figure 18: Mean accuracy results for Lymphoma dataset. The dataset consists of three classes of hematological diseases. The best results were obtained when the size of the local window is bigger than 40.

Every subfigure in Figures 13–18 shows the results for a specific window size. Red bars correspond to Shmaliy moments, blue bars to Tchebichef moments, and green bars to Krawtchouk moments. The vertical lines at the center of the bars represent the standard deviations computed by averaging the cross-validated results.

5.1. ANOVA Analysis

N-way ANOVA [57], or factorial ANOVA, is performed in order to find out whether the choice of window size, classifier, and DOM is statistically significant, because the classification results are very similar to each other.

ANOVA compares the means between groups and resolves whether any of them has a statistically significant difference from the others. ANOVA tests the null hypothesis $H_0: \mu_1 = \mu_2 = \cdots = \mu_k$, where $\mu_i$ is the mean of group $i$. In other words, the null hypothesis is that the group choice is irrelevant. If this hypothesis is rejected, the alternative hypothesis $H_1$ is accepted, which implies that at least two group means are significantly different from each other.

In our experiments, the accuracy is the dependent variable. The window size, classifier, and discrete orthogonal moment are the three nominal or independent variables; that is, $N = 3$. N-way ANOVA can also examine the interactions between the independent variables, because the interactions show that the differences between the groups are not uniform across the categories of the independent variables. The result after applying ANOVA is a set of p-values; one p-value is computed for each nominal variable and for each interaction among them. In general, if the p-value is less than 5%, the null hypothesis is rejected. Table 2 shows the p-values, in percentages, of the independent variables and their interactions.
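The mechanics of the test reduce to comparing between-group and within-group variability. For brevity, the sketch below computes the F statistic for the one-way case only; the 3-way analysis used here additionally partitions the variance across the other factors and their interactions:

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic for one-way ANOVA: between-group mean square over
    within-group mean square. The N-way version adds further factors
    plus their interaction terms."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate([np.asarray(g, float) for g in groups]).mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g, float) - np.mean(g)) ** 2).sum()
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (compared against the F distribution with $k-1$ and $n-k$ degrees of freedom) yields a small p-value and leads to rejecting $H_0$.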

Table 2: p-values of N-way ANOVA.

Based on Table 2, N-way ANOVA shows that the independent variables are statistically significant, separately, for all image databases.

First, the window size is important because the larger the size, the better the overall classification accuracy. The classifier is also statistically relevant; in general, better results are obtained with both naïve Bayes and SVM (linear and Gaussian) classifiers. The DOMs also have statistical significance. The best classification results for Celegans and HeLa were obtained with Krawtchouk moments; for CHO, the best results were obtained with Tchebichef moments; and for Liver aging and Lymphoma, discrete Shmaliy moments scored the best results. This situation may occur because each dataset is more correlated with a certain polynomial basis.

It is important to mention that Binucleate dataset was not included in ANOVA because, for all the parameter combinations, the classification results were very high.

The best results from our experiments are compared with the state-of-the-art works: Shamir et al. [22, 58], Siji et al. [59], and Meng et al. [60] classified the same datasets from IICBU-2008 benchmark.

Shamir et al. presented IICBU-2008 benchmark in [22]. They also used a popular tool called WND-CHRM [61, 62] to extract and classify different feature sets, for example, Radon transform, Gabor coefficients, multiscale histograms, and Tamura texture features. Afterwards, Shamir et al. used Fourier, discrete Tchebichef moments, and Wavelet transforms up to fourth order to extract texture features; WND-CHRM was used again to perform the classification. They reported results with a subset of data, including CHO and HeLa datasets.

Siji et al. proposed applying principal component analysis (PCA) and Fisher-score-based feature selection for image retrieval applications. They used the same features computed by WND-CHRM for the Binucleate, Celegans, CHO, and HeLa databases and used naïve Bayes and SVMs as classifiers.

Meng et al. proposed a classification model called collateral representative subspace projection modeling (C-RSPM) for Liver Aging and Lymphoma datasets.

Table 3 summarizes and compares the results of our proposal and the ones reported in the state-of-the-art.

Table 3: Comparison results between discrete orthogonal moments and state-of-the-art methods.

6. Discussion and Conclusions

Since their recent introduction, Shmaliy moments had not been applied to image texture analysis. In this paper, we show for the first time the capability of Shmaliy moments for computing texture features and compare them with the most popular families of discrete orthogonal moments: Tchebichef and Krawtchouk moments. Moreover, the results are compared with a set of methods that use the IICBU-2008 benchmark.

The proposed classification method is sufficiently general to be useful on many different types of images. We included a preprocessing stage that consists of color space transformation and, in some cases, resizing the datasets to accelerate the description and classification processes. Our proposal is robust and may be useful in many cases that do not necessarily belong to the biomedical field.

The classification results obtained by discrete Shmaliy moments are similar to the ones achieved with Tchebichef and Krawtchouk moments; therefore, a 3-way ANOVA was applied in order to verify whether the parameter selection (classifier, window size, and moment) was significant. ANOVA suggests that the classifier is the most important parameter, because the null hypothesis was rejected for all databases, while the discrete orthogonal moment and the window size selections show lower significance, since the null hypothesis was rejected on only three of five databases.

On the other hand, the comparison between discrete orthogonal moments and other techniques found in the literature shows competitive results. Accordingly, Shmaliy moments have shown the same capability for texture description as other classical discrete polynomial bases; therefore, their consideration is pertinent for texture analysis, and a combination of Shmaliy texture features with other discrete moments could be a topic for future work.

Data Availability

The IICBU-2008 benchmark can be downloaded from https://ome.grc.nia.nih.gov/iicbu2008/.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work has been sponsored by UNAM Grants PAPIIT IN116917 and SECITI 110/2015. Germán González thanks CONACYT for Scholarship 263921.

References

  1. H. Zhu, M. Liu, H. Shu, H. Zhang, and L. Luo, “General form for obtaining discrete orthogonal moments,” IET Image Processing, vol. 4, no. 5, pp. 335–352, 2010.
  2. R. Mukundan, S. H. Ong, and P. A. Lee, “Discrete vs. continuous orthogonal moments for image analysis,” in Proceedings of the International Conference on Imaging Science, Systems, and Technology, pp. 23–29, 2001.
  3. H. Zhu, H. Shu, J. Liang, L. Luo, and J.-L. Coatrieux, “Image analysis by discrete orthogonal Racah moments,” Signal Processing, vol. 87, no. 4, pp. 687–708, 2007.
  4. H. Zhu, H. Shu, J. Zhou, L. Luo, and J. L. Coatrieux, “Image analysis by discrete orthogonal dual Hahn moments,” Pattern Recognition Letters, vol. 28, no. 13, pp. 1688–1704, 2007.
  5. R. Mukundan, S. H. Ong, and P. A. Lee, “Image analysis by Tchebichef moments,” IEEE Transactions on Image Processing, vol. 10, no. 9, pp. 1357–1364, 2001.
  6. J. Zhou, H. Shu, H. Zhu, C. Toumoulin, and L. Luo, “Image analysis by discrete orthogonal Hahn moments,” in Image Analysis and Recognition, vol. 3656 of Lecture Notes in Computer Science, pp. 524–531, Springer, Berlin, Germany, 2005.
  7. I. Batioua, R. Benouini, K. Zenkouar, and H. E. Fadili, “Image analysis using new set of separable two-dimensional discrete orthogonal moments based on Racah polynomials,” EURASIP Journal on Image and Video Processing, vol. 2017, no. 1, 2017.
  8. N. Farajzadeh, K. Faez, and G. Pan, “Study on the performance of moments as invariant descriptors for practical face recognition systems,” IET Computer Vision, vol. 4, no. 4, pp. 272–285, 2010.
  9. P. Kaur and H. S. Pannu, “Comparative analysis of continuous and discrete orthogonal moments for face recognition,” in Proceedings of the 2017 International Conference of Electronics, Communication and Aerospace Technology (ICECA), vol. 1, pp. 449–453, Coimbatore, India, April 2017.
  10. C. Di Ruberto and A. Morgera, “Moment-based techniques for image retrieval,” in Proceedings of the 19th International Conference on Database and Expert Systems Applications (DEXA 2008), pp. 155–159, Italy, September 2008.
  11. J. C. Patra, S. Sood, P. K. Meher, and C. Bornand, “Content based image retrieval using orthogonal moments heuristically,” in Proceedings of the 2008 IEEE International Conference on Systems, Man and Cybernetics, SMC 2008, pp. 512–517, Singapore, October 2008.
  12. V. S. Bharathi and L. Ganesan, “Orthogonal moments based texture analysis of CT liver images,” Pattern Recognition Letters, vol. 29, no. 13, pp. 1868–1872, 2008.
  13. K. H. Thung, S. C. Ng, C. L. Lim, and P. Raveendran, “A preliminary study of compression efficiency and noise robustness of orthogonal moments on medical X-ray images,” in Proceedings of the 5th Kuala Lumpur International Conference on Biomedical Engineering 2011, vol. 35, pp. 587–590, Springer, Berlin, Heidelberg, 2011.
  14. K. M. Hosny, G. A. Papakostas, and D. E. Koulouriotis, “Accurate reconstruction of noisy medical images using orthogonal moments,” in Proceedings of the 2013 18th International Conference on Digital Signal Processing, DSP 2013, pp. 1–6, Greece, July 2013.
  15. R. Nava, G. González, J. Kybic, and B. Escalante-Ramírez, “Characterization of hematologic malignancies based on discrete orthogonal moments,” in Proceedings of the 6th International Conference on Image Processing Theory, Tools and Applications, IPTA 2016, pp. 1–6, Finland, December 2016.
  16. R. Nava, G. González, J. Kybic et al., “Classification of tumor epithelium and stroma in colorectal cancer based on discrete Tchebichef moments,” in Clinical Image-Based Procedures. Translational Research in Medical Imaging, L. Oyarzun, C. Shekhar, and R. Wesarg, Eds., Springer International Publishing, Cham, Switzerland, 2016.
  17. R. Mukundan, “Some computational aspects of discrete orthonormal moments,” IEEE Transactions on Image Processing, vol. 13, no. 8, pp. 1055–1059, 2004.
  18. P. Yap, R. Paramesran, and S. Ong, “Image analysis by Krawtchouk moments,” IEEE Transactions on Image Processing, vol. 12, no. 11, pp. 1367–1377, 2003.
  19. B. Bayraktar, T. Bernas, J. Paul Robinson, and B. Rajwa, “A numerical recipe for accurate image reconstruction from discrete orthogonal moments,” Pattern Recognition, vol. 40, no. 2, pp. 659–669, 2007.
  20. L. J. Morales-Mendoza, H. Gamboa-Rosales, and Y. S. Shmaliy, “A new class of discrete orthogonal polynomials for blind fitting of finite data,” Signal Processing, vol. 93, no. 7, pp. 1785–1793, 2013.
  21. B. H. Shakibaei Asli and J. Flusser, “New discrete orthogonal moments for signal analysis,” Signal Processing, vol. 141, pp. 57–73, 2017.
  22. L. Shamir, N. Orlov, D. Mark Eckley, T. J. Macura, and I. G. Goldberg, “IICBU 2008: A proposed benchmark suite for biological image analysis,” Medical & Biological Engineering & Computing, vol. 46, no. 9, pp. 943–947, 2008.
  23. M. R. Teague, “Image analysis via the general theory of moments,” Journal of the Optical Society of America, vol. 70, no. 8, pp. 920–930, 1980.
  24. A. Khotanzad and Y. H. Hong, “Invariant image recognition by Zernike moments,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 5, pp. 489–497, 1990.
  25. R. Mukundan and K. R. Ramakrishnan, “Fast computation of Legendre and Zernike moments,” Pattern Recognition, vol. 28, no. 9, pp. 1433–1442, 1995.
  26. Å. Wallin and O. Kübler, “Complete sets of complex Zernike moment invariants and the role of the pseudoinvariants,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 11, pp. 1106–1110, 1995.
  27. R. Mukundan and K. R. Ramakrishnan, Moment Functions in Image Analysis: Theory and Applications, World Scientific, 1998.
  28. N. M. Temme, Special Functions: An Introduction to the Classical Functions of Mathematical Physics, John Wiley and Sons, 2011.
  29. J. V. Marcos and G. Cristóbal, “Texture classification using discrete Tchebichef moments,” Journal of the Optical Society of America A: Optics and Image Science, and Vision, vol. 30, no. 8, pp. 1580–1591, 2013.
  30. P. Heinonen and Y. Neuvo, “FIR-median hybrid filters with predictive FIR substructures,” IEEE Transactions on Signal Processing, vol. 36, no. 6, pp. 892–899, 1988.
  31. Y. Sheng and L. Shen, “Orthogonal Fourier-Mellin moments for invariant pattern recognition,” Journal of the Optical Society of America A: Optics and Image Science, and Vision, vol. 11, no. 6, pp. 1748–1757, 1994.
  32. M. Krawtchouk, “On interpolation by means of orthogonal polynomials,” Memoirs Agricultural Inst. Kyiv, vol. 4, pp. 21–28, 1929.
  33. H. Shu, H. Zhang, B. Chen, P. Haigron, and L. Luo, “Fast computation of Tchebichef moments for binary and grayscale images,” IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3171–3180, 2010.
  34. A. F. Nikiforov and V. B. Uvarov, Special Functions of Mathematical Physics, Birkhäuser, Basel, Switzerland, 1988.
  35. Mathematics and Social Utopias in France: Olinde Rodrigues and His Times, American Mathematical Society, Providence, RI, USA; London Mathematical Society, London, UK, 2005.
  36. S. H. Abdulhussain, A. R. Ramli, S. A. R. Al-Haddad, B. M. Mahmmod, and W. A. Jassim, “On computational aspects of Tchebichef polynomials for higher polynomial order,” IEEE Access, vol. 5, pp. 2470–2478, 2017.
  37. L. Kotoulas and I. Andreadis, “Fast moment generating architectures,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 4, pp. 533–537, 2008.
  38. G. A. Papakostas, D. E. Koulouriotis, and E. Karakasis, “A unified methodology for the efficient computation of discrete orthogonal image moments,” Information Sciences, vol. 179, no. 20, pp. 3619–3633, 2009.
  39. W. Guobao and W. Shigang, “Recursive computation of Tchebichef moment and its inverse transform,” Pattern Recognition, vol. 39, no. 1, pp. 47–56, 2006.
  40. M. Tuceryan and A. K. Jain, “Texture analysis,” in Handbook of Pattern Recognition and Computer Vision, World Scientific, 1993.
  41. R. M. Haralick, K. Shanmugam et al., “Textural features for image classification,” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3, no. 6, pp. 610–621, 1973.
  42. J. G. Daugman, “Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters,” Journal of the Optical Society of America A: Optics and Image Science, and Vision, vol. 2, no. 7, pp. 1160–1169, 1985.
  43. T. Ojala, M. Pietikäinen, and T. Mäenpää, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971–987, 2002.
  44. M. Li and R. C. Staunton, “Optimum Gabor filter design and local binary patterns for texture segmentation,” Pattern Recognition Letters, vol. 29, no. 5, pp. 664–672, 2008.
  45. J. A. Lee and M. Verleysen, Nonlinear Dimensionality Reduction, Information Science and Statistics, Springer, New York, NY, USA, 2007.
  46. R. A. Fisher, “The use of multiple measurements in taxonomic problems,” Annals of Eugenics, vol. 7, pp. 179–188, 1936.
  47. M. V. Boland, M. K. Markey, and R. F. Murphy, “Automated recognition of patterns characteristic of subcellular structures in fluorescence microscopy images,” Cytometry, vol. 33, no. 3, pp. 366–375, 1998.
  48. M. V. Boland and R. F. Murphy, “A neural network classifier capable of recognizing the patterns of all major subcellular structures in fluorescence microscope images of HeLa cells,” Bioinformatics, vol. 17, no. 12, pp. 1213–1223, 2002.
  49. C. Di Ruberto, G. Fodde, and L. Putzu, “Comparison of statistical features for medical colour image classification,” in International Conference on Computer Vision Systems, pp. 3–13, Springer, 2015.
  50. B. Zhang and W. Lu, “Classification of malignant lymphomas by classifier ensemble with multiple texture features,” in Life System Modeling and Intelligent Computing, vol. 6330 of Lecture Notes in Computer Science, pp. 155–164, Springer, Berlin, Heidelberg, 2010.
  51. L. P. Coelho, J. D. Kangas, A. W. Naik et al., “Determining the subcellular location of new proteins from microscope images using local features,” Bioinformatics, vol. 29, no. 18, pp. 2343–2349, 2013.
  52. N. S. Altman, “An introduction to kernel and nearest-neighbor nonparametric regression,” The American Statistician, vol. 46, no. 3, pp. 175–185, 1992.
  53. B. E. Boser, I. M. Guyon, and V. N. Vapnik, “Training algorithm for optimal margin classifiers,” in Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory (COLT '92), pp. 144–152, July 1992.
  54. C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
  55. P. Langley, W. Iba, and K. Thompson, “An analysis of Bayesian classifiers,” pp. 223–228, Association for the Advancement of Artificial Intelligence (AAAI), 1992.
  56. L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
  57. O. J. Dunn and V. A. Clark, Applied Statistics: Analysis of Variance and Regression, John Wiley & Sons, 1974.
  58. L. Shamir, N. Orlov, and I. G. Goldberg, “Evaluation of the informativeness of multi-order image transforms,” in Proceedings of the 2009 International Conference on Image Processing, Computer Vision, and Pattern Recognition, IPCV 2009, pp. 37–42, USA, July 2009.
  59. K. K. Siji, B. Mathew, R. Chandran, B. S. Shajeemohan, and K. S. Shanthini, “Feature selection, optimization and performance analysis of classifiers for biological images,” in Proceedings of the 2014 IEEE National Conference on Communication, Signal Processing and Networking, NCCSN 2014, India, October 2014.
  60. T. Meng, L. Lin, M.-L. Shyu, and S.-C. Chen, “Histology image classification using supervised classification and multimodal fusion,” in Proceedings of the 2010 IEEE International Symposium on Multimedia, ISM 2010, pp. 145–152, Taiwan, December 2010.
  61. N. Orlov, L. Shamir, T. Macura, J. Johnston, D. M. Eckley, and I. G. Goldberg, “WND-CHARM: multi-purpose image classification using compound image transforms,” Pattern Recognition Letters, vol. 29, no. 11, pp. 1684–1693, 2008.
  62. L. Shamir, N. Orlov, D. M. Eckley, T. Macura, J. Johnston, and I. G. Goldberg, “Wndchrm—an open source utility for biological image analysis,” Source Code for Biology and Medicine, vol. 3, article 13, 2008.