Abstract

Scale invariants of Tchebichef moments are usually achieved by a linear combination of corresponding invariants of geometric moments or via an iterative algorithm to eliminate the scale factor. According to the properties of Tchebichef polynomials, we propose a new approach to construct scale invariants of Tchebichef moments. An algorithm based on matrix multiplication is also provided to efficiently compute the 3D moments and invariants. Several experiments are carried out to validate the effectiveness of our descriptors and algorithm.

1. Introduction

Since the introduction of geometric moment invariants by Hu [1], moments and moment functions have been extensively applied in pattern recognition [2, 3]. Because the kernel functions of geometric moments are not orthogonal, these moments suffer from a high degree of information redundancy and are sensitive to noise, especially when higher-order moments are concerned. Teague [4] introduced the orthogonal continuous Legendre and Zernike moments, which represent an image with minimal information redundancy and can easily be used for image reconstruction. The main drawback of these moments is the discretization error, which accumulates as the moment order increases [5]. To resolve this problem, discrete orthogonal polynomials have been used to construct moments, such as the Tchebichef [6], Krawtchouk [7], and Hahn [8] moments. Their basis functions exactly satisfy the orthogonality condition, so they require neither numerical approximation nor spatial domain transformation. This makes them superior to the conventional continuous moments in terms of image representation capability.

Recently, the problem of moment invariance has been extensively investigated. For example, invariants of Legendre moments have been obtained through the image normalization method and the indirect method. Chong et al. [9] proposed a direct method to construct translation and scale invariants of Legendre moments, and Ong et al. [10] generalized this method to 3D. Following this approach, Zhu et al. [11] derived translation and scale invariants of Tchebichef moments. However, Khalid pointed out that one weakness of this method is its high computational cost, especially when the image size is large and higher-order moments are concerned; the method is therefore suitable only for sets of small binary images [12]. Another difficulty of the aforementioned scale invariants is how to determine their parameters, which must be selected carefully to strike a compromise between numerical stability and complexity.

Inspired by the method proposed by Zhang et al. [13], we propose in this paper an improved approach to construct 3D scale invariants of Tchebichef moments. Instead of normalizing by lower-order moments, the scale factors are eliminated by exploiting the orthogonality of the coefficients. This method avoids the heavy computation caused by iteration. Moreover, we propose an algorithm based on matrix multiplication to compute the 3D moments and invariants efficiently.

The remainder of this paper is organized as follows. In Section 2, Tchebichef polynomials and the derivation of the corresponding scale invariants are described in detail. Section 3 discusses an efficient way of computing the 3D moments and invariants. Experimental results evaluating the performance of the proposed descriptors are given in Section 4. Finally, conclusions are provided in Section 5.

2. Improved Scale Invariants of 3D Tchebichef Moments

In this section, the falling factorial is introduced to build a mutual relationship between Tchebichef polynomials and power series. In order to separate the scale factor, the Tchebichef polynomials are first transformed into power series. These separated power series are then expressed through Tchebichef polynomials again. In this way, the moments of a scaled image can be expressed as a linear combination of the original moments. We use the Stirling numbers instead of tedious iterations to obtain the scale invariants, because the recursive procedure is an inherent deficiency of the descriptors reported in [9–11].

2.1. Some Properties of Tchebichef Polynomials

The scaled Tchebichef polynomial of order $n$ and its squared-norm are defined as [6]

$$\tilde{t}_n(x) = \frac{(1-N)_n}{\beta(n,N)}\,{}_3F_2(-n,\,-x,\,1+n;\,1,\,1-N;\,1),\qquad x = 0, 1, \dots, N-1, \quad (1)$$

where $(a)_k$ is the Pochhammer symbol given by

$$(a)_k = a(a+1)(a+2)\cdots(a+k-1),\qquad k \ge 1,\quad (a)_0 = 1, \quad (2)$$

$\beta(n,N)$ is a normalization constant independent of $x$, and $\tilde{\rho}(n,N)$ is the squared-norm defined by

$$\tilde{\rho}(n,N) = \frac{\rho(n,N)}{\beta(n,N)^2},\qquad \rho(n,N) = \frac{N(N^2-1)(N^2-2^2)\cdots(N^2-n^2)}{2n+1}. \quad (3)$$

The falling factorial is defined as [14]

$$x^{(k)} = x(x-1)(x-2)\cdots(x-k+1),\qquad x^{(0)} = 1. \quad (4)$$

Using (4), (1) can be rewritten as

$$\tilde{t}_n(x) = \sum_{k=0}^{n} c_{n,k}\, x^{(k)}, \quad (5)$$

where

$$c_{n,k} = \frac{(-1)^k\,(1-N)_n\,(-n)_k\,(1+n)_k}{\beta(n,N)\,(k!)^2\,(1-N)_k}. \quad (6)$$

Let $\mathbf{T}(x) = (\tilde{t}_0(x), \tilde{t}_1(x), \dots, \tilde{t}_{N-1}(x))^T$ and $\mathbf{X}(x) = (x^{(0)}, x^{(1)}, \dots, x^{(N-1)})^T$ be two column vectors, where the superscript T indicates transposition. Using (5), we have

$$\mathbf{T}(x) = C\,\mathbf{X}(x), \quad (7)$$

where $C = (c_{n,k})$, $0 \le k \le n \le N-1$, is a lower triangular matrix of size $N \times N$. The matrix $C$ is invertible because its diagonal elements are not zero; therefore,

$$\mathbf{X}(x) = D\,\mathbf{T}(x), \quad (8)$$

where $D = C^{-1} = (d_{n,k})$. $D$ is a lower triangular matrix too, and its elements are given in [15].
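As a numerical illustration (ours, not the paper's), the falling-factorial expansion above can be checked against the hypergeometric-sum definition of the Tchebichef polynomial; for simplicity the unscaled polynomial (β(n, N) = 1) is used, and all function names are our own.

```python
# Sketch: expand the (unscaled) Tchebichef polynomial in falling factorials,
#   t_n(x) = sum_k c(n,k) x^(k),
#   c(n,k) = (-1)^k (1-N)_n (-n)_k (1+n)_k / ((k!)^2 (1-N)_k),
# and check the result against the direct hypergeometric sum.
from math import factorial

def poch(a, k):
    # Pochhammer symbol (a)_k = a (a+1) ... (a+k-1), with (a)_0 = 1
    r = 1
    for i in range(k):
        r *= a + i
    return r

def falling(x, k):
    # falling factorial x^(k) = x (x-1) ... (x-k+1), with x^(0) = 1
    r = 1
    for i in range(k):
        r *= x - i
    return r

def coeff(n, k, N):
    # coefficient of x^(k) in t_n(x)
    return ((-1) ** k * poch(1 - N, n) * poch(-n, k) * poch(1 + n, k)
            / (factorial(k) ** 2 * poch(1 - N, k)))

def t_direct(n, x, N):
    # t_n(x) = (1-N)_n * 3F2(-n, -x, 1+n; 1, 1-N; 1)
    return poch(1 - N, n) * sum(
        poch(-n, k) * poch(-x, k) * poch(1 + n, k)
        / (factorial(k) ** 2 * poch(1 - N, k))
        for k in range(n + 1))

def t_expanded(n, x, N):
    # same polynomial, evaluated through the falling-factorial expansion
    return sum(coeff(n, k, N) * falling(x, k) for k in range(n + 1))
```

Both evaluations agree because $(-x)_k = (-1)^k x^{(k)}$, which is exactly the step that links the hypergeometric form to the falling-factorial form.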

The falling factorial and the power series can be expanded mutually:

$$x^{(n)} = \sum_{k=0}^{n} s(n,k)\, x^{k},\qquad x^{n} = \sum_{k=0}^{n} S(n,k)\, x^{(k)}, \quad (10)$$

where $s(n,k)$ is the Stirling number of the first kind, satisfying the following recurrence relations:

$$s(n+1,k) = s(n,k-1) - n\,s(n,k),\qquad s(0,0) = 1,\quad s(n,0) = s(0,k) = 0 \ \ (n, k \ge 1), \quad (11)$$

and $S(n,k)$ is the Stirling number of the second kind, satisfying

$$S(n+1,k) = S(n,k-1) + k\,S(n,k),\qquad S(0,0) = 1,\quad S(n,0) = S(0,k) = 0 \ \ (n, k \ge 1). \quad (12)$$
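The two recurrences can be implemented directly, and the mutual-expansion property checked numerically; a minimal sketch (function names are ours):

```python
# Sketch: Stirling numbers of the first and second kind from their
# recurrence relations, plus the falling factorial, so that
#   x^(n) = sum_k s(n,k) x^k   and   x^n = sum_k S(n,k) x^(k)
# can be checked numerically.

def stirling_first(nmax):
    # s(n+1,k) = s(n,k-1) - n*s(n,k), with s(0,0) = 1
    s = [[0] * (nmax + 1) for _ in range(nmax + 1)]
    s[0][0] = 1
    for n in range(nmax):
        for k in range(1, nmax + 1):
            s[n + 1][k] = s[n][k - 1] - n * s[n][k]
    return s

def stirling_second(nmax):
    # S(n+1,k) = S(n,k-1) + k*S(n,k), with S(0,0) = 1
    S = [[0] * (nmax + 1) for _ in range(nmax + 1)]
    S[0][0] = 1
    for n in range(nmax):
        for k in range(1, nmax + 1):
            S[n + 1][k] = S[n][k - 1] + k * S[n][k]
    return S

def falling(x, k):
    # falling factorial x^(k) = x (x-1) ... (x-k+1)
    r = 1
    for i in range(k):
        r *= x - i
    return r
```

For example, with n = 5 and x = 7, both expansions reproduce 7^(5) = 2520 and 7^5 = 16807 exactly.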

The relationship between Tchebichef polynomials and power series has been established via the falling factorial; it is the foundation of our descriptors.

2.2. Scale Invariants of 1D Tchebichef Moments

The 1D Tchebichef moment of order $n$ for an $N$-length signal $f(x)$ is defined as

$$T_n = \frac{1}{\tilde{\rho}(n,N)} \sum_{x=0}^{N-1} \tilde{t}_n(x)\, f(x), \quad (13)$$

where $\tilde{\rho}(n,N)$ is the squared-norm of the scaled polynomials. Let $f'(x)$ be a scaled version of $f(x)$ with a scale factor $a$; that is, $f'(x) = f(x/a)$. Then the Tchebichef moment of order $n$ of $f'(x)$ has the form

$$T'_n = \frac{a}{\tilde{\rho}(n,N)} \sum_{x=0}^{N-1} \tilde{t}_n(ax)\, f(x). \quad (14)$$

From (5), (8), and (10), we can see that

$$\tilde{t}_n(ax) = \sum_{m=0}^{n} \Biggl[\, \sum_{k=m}^{n} \sum_{p=m}^{k} \sum_{q=m}^{p} c_{n,k}\, s(k,p)\, a^{p}\, S(p,q)\, d_{q,m} \Biggr] \tilde{t}_m(x), \quad (15)$$

where $c_{n,k}$ and $d_{q,m}$ are the elements of the matrices $C$ and $D$ of Section 2.1, and $s(\cdot,\cdot)$ and $S(\cdot,\cdot)$ are the Stirling numbers of the first and second kind. Substituting (15) into (14), we have

$$T'_n = a \sum_{m=0}^{n} \frac{\tilde{\rho}(m,N)}{\tilde{\rho}(n,N)} \Biggl[\, \sum_{k=m}^{n} \sum_{p=m}^{k} \sum_{q=m}^{p} c_{n,k}\, s(k,p)\, a^{p}\, S(p,q)\, d_{q,m} \Biggr] T_m. \quad (16)$$

Equation (16) shows that the Tchebichef moments of the scaled signal can be expressed as a linear combination of the original ones. Based on this relationship, we derive the following theorem.
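To make the 1D quantities concrete, here is a small sketch (our own, not the paper's code) that evaluates the unscaled Tchebichef polynomials, their squared-norms, and the resulting 1D moments:

```python
# Sketch: 1D Tchebichef moments T_n = (1/rho(n,N)) * sum_x t_n(x) f(x),
# with t_n(x) evaluated from its hypergeometric-sum form and rho(n,N)
# the squared-norm. All names are ours.
from math import factorial

def poch(a, k):
    # Pochhammer symbol (a)_k = a (a+1) ... (a+k-1)
    r = 1
    for i in range(k):
        r *= a + i
    return r

def t(n, x, N):
    # unscaled Tchebichef polynomial t_n(x) on {0, ..., N-1}
    return poch(1 - N, n) * sum(
        poch(-n, k) * poch(-x, k) * poch(1 + n, k)
        / (factorial(k) ** 2 * poch(1 - N, k))
        for k in range(n + 1))

def rho(n, N):
    # squared-norm rho(n,N) = N (N^2-1)(N^2-4)...(N^2-n^2) / (2n+1)
    r = N
    for i in range(1, n + 1):
        r *= N * N - i * i
    return r / (2 * n + 1)

def moments(f, order):
    # normalized 1D Tchebichef moments of the signal f, orders 0..order
    N = len(f)
    return [sum(t(n, x, N) * f[x] for x in range(N)) / rho(n, N)
            for n in range(order + 1)]
```

The polynomials produced by `t` are orthogonal over x = 0, …, N−1 with squared-norm `rho(n, N)`, which can be checked directly.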

Theorem 1. For a given integer , let with . Then is invariant to image scaling.

The proof is given in the appendix.

2.3. Scale Invariants of 3D Tchebichef Moments

The $(n+p+q)$th order Tchebichef moment of an $N \times M \times K$ 3D image $f(x,y,z)$ is defined by

$$T_{npq} = \frac{1}{\tilde{\rho}(n,N)\,\tilde{\rho}(p,M)\,\tilde{\rho}(q,K)} \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} \sum_{z=0}^{K-1} \tilde{t}_n(x)\, \tilde{t}_p(y)\, \tilde{t}_q(z)\, f(x,y,z), \quad (18)$$

where $\tilde{\rho}(\cdot,\cdot)$ denotes the squared-norm of the scaled polynomials.

Let us assume that the original image is scaled with factors $a$, $b$, and $c$ along the $x$-direction, $y$-direction, and $z$-direction, respectively; that is, $f'(x,y,z) = f(x/a, y/b, z/c)$. The moments of the scaled image are given by

$$T'_{npq} = \frac{abc}{\tilde{\rho}(n,N)\,\tilde{\rho}(p,M)\,\tilde{\rho}(q,K)} \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} \sum_{z=0}^{K-1} \tilde{t}_n(ax)\, \tilde{t}_p(by)\, \tilde{t}_q(cz)\, f(x,y,z). \quad (19)$$

Similarly to the 1D case, we can rewrite (19) as

$$T'_{npq} = abc \sum_{i=0}^{n} \sum_{j=0}^{p} \sum_{k=0}^{q} \Lambda_{ni}(a)\, \Lambda_{pj}(b)\, \Lambda_{qk}(c)\, T_{ijk}, \quad (20)$$

where $\Lambda_{ni}(a)$ denotes the coefficient of $T_i$ in the 1D relation (16). To construct the 3D moment invariants, we need the following lemma.

Lemma 2. Let Then, one has , , and .

Proof. Using (6), (9), and (19), we have Taking into account, we have . The other two relationships can be demonstrated in a similar way.

Using (20) and Lemma 2, we can construct the scale invariants of 3D Tchebichef moments, which are described in the following theorem.

Theorem 3. For given integers , , , let where , , and are defined in Lemma 2. Then, is invariant to image scale.

The proof is similar to that of Theorem 1 since the kernel function is separable, and so it is omitted here.

3. Computing 3D Moments and Invariants

To achieve scale invariance of the 3D moments, (23) implies that a 12-level nested loop must be computed. Since the structure of (23) is symmetric, we introduce a new algorithm based on matrix multiplication to reduce the number of loops.

3.1. Computing 3D Moments

Let us begin with the 1D case. Let $\mathbf{T} = (T_0, T_1, \dots, T_{N-1})^T$ be a column vector constituted by the moments of $f(x)$, and let $\mathbf{f} = (f(0), f(1), \dots, f(N-1))^T$. From the definition of the 1D moments, we have

$$\mathbf{T} = \mathbf{K}\,\mathbf{f}, \quad (24)$$

where $\mathbf{K}$ is the $N \times N$ kernel matrix whose entry in row $n$ and column $x$ is $\tilde{t}_n(x)/\tilde{\rho}(n,N)$.

Yap et al. [7] proposed a matrix form of the moments of a 2D image:

$$\mathbf{T} = \mathbf{K}_1\, \mathbf{F}\, \mathbf{K}_2^{T}, \quad (25)$$

where $\mathbf{T}$ is the $N \times M$ matrix of Tchebichef moments, $\mathbf{F}$ is the $N \times M$ image matrix, and $\mathbf{K}_1$ and $\mathbf{K}_2$ are the kernel matrices along the two dimensions.

Exchanging the order of summation, we rewrite (18) as

$$T_{npq} = \frac{1}{\tilde{\rho}(q,K)} \sum_{z=0}^{K-1} \tilde{t}_q(z) \Biggl[ \frac{1}{\tilde{\rho}(n,N)\,\tilde{\rho}(p,M)} \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} \tilde{t}_n(x)\, \tilde{t}_p(y)\, f(x,y,z) \Biggr]. \quad (26)$$

By combining (24) with (25), an effective algorithm to calculate (26) can be derived as follows.

Step 1. Computing the 2D moments of each slice along the $z$-direction by (25), we obtain $K$ temporary matrices (see Figure 8); the element in the $n$th row and $p$th column of the $z$th plane is the 2D moment of order $(n, p)$ of the $z$th slice.

Step 2. Along the $z$-direction, we rearrange the temporary matrices obtained in Step 1 into a matrix of size $K \times NM$. The required 3D moments are then obtained by premultiplying this matrix by the kernel matrix of the $z$-direction.
The matrix representation is usually considered very effective in software packages such as MATLAB, but our experiments in Section 4 show that it performs well in C++ too.
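In NumPy terms, Steps 1 and 2 amount to two tensor contractions. The sketch below is our own illustration; Kx, Ky, and Kz are assumed to be the kernel matrices holding the (normalized) polynomial values along each axis, and the naive six-fold summation is included only as a correctness check.

```python
# Sketch: 3D Tchebichef moments via matrix multiplication,
# following the two-step scheme of Section 3.1.
import numpy as np

def moments_3d(f, Kx, Ky, Kz):
    # Step 1: 2D moments of every z-slice, T2[:, :, z] = Kx @ f[:, :, z] @ Ky.T
    T2 = np.einsum('nx,xyz,py->npz', Kx, f, Ky)
    # Step 2: premultiply along the z-direction by the kernel Kz
    return np.einsum('qz,npz->npq', Kz, T2)

def moments_3d_naive(f, Kx, Ky, Kz):
    # Direct six-level nested summation, for checking the matrix version
    N, M, K = f.shape
    T = np.zeros((N, M, K))
    for n in range(N):
        for p in range(M):
            for q in range(K):
                for x in range(N):
                    for y in range(M):
                        for z in range(K):
                            T[n, p, q] += Kx[n, x] * Ky[p, y] * Kz[q, z] * f[x, y, z]
    return T
```

The contraction form touches each axis once, which is why its cost grows far more slowly with the moment order than that of the naive loops.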

3.2. Computing 3D Invariants

Equation (23) has a similar structure to the definition of 3D moments. Exchanging the order of summation, we can rewrite (23) as where

Therefore, the algorithm presented in the previous subsection can be applied to compute the invariants. The matrices appearing in (29a)–(29c), denoted C through D, are all lower triangular, and (29a) can be evaluated in matrix form; the same procedure is repeated to compute (29b) and (29c).

4. Experimental Results

In this section, we first evaluate the efficiency of our computing algorithm, comparing it with geometric moment invariants [16], Legendre invariants [10], and Zhu's invariants [11]. We then illustrate the performance of the proposed descriptors on a 3D MRI image. Finally, the classification abilities of the four invariants are tested on character sets.

A 3D silhouette image, shown in Figure 1, is composed of 128 binary images. It is used to evaluate the computational speed of the algorithm described in Section 3. Figure 2 shows the computational time required to calculate the invariants of order up to 18. The program is implemented in C++ on a PC with an Intel Core 2 Duo P8400 2.4 GHz CPU and 4 GB RAM. Figure 2 shows that our descriptors require less computation time as the order of the invariants increases, because we apply the proposed algorithm to compute both the Tchebichef moments and the invariants. The geometric moments and corresponding invariants are calculated directly through nested loops. For Zhu's invariants and the Legendre invariants, we first use our method to obtain the 3D moments, and then nested loops are used to compute the moment invariants. Since there are no nested loops in our algorithm, the required time increases only slightly as the order grows.

The Volume Library [16] contains regular volume data mainly coming from CT or MRI scanners. Figure 3 shows a 3D MRI head image selected from this library. We resize this head image by factors of 0.5, 1, 1.5, and 2 along the $x$-axis, $y$-axis, and $z$-axis, respectively, which forms a test set of 64 images with different sizes. In order to measure the invariance of the descriptors, we adopt the deviation σ/μ proposed by Chong et al. [9], where σ and μ denote the standard deviation and the mean of the invariants of the same order, respectively. Figure 4 illustrates the deviation σ/μ of the geometric invariants, Legendre invariants, Zhu's invariants, and the proposed descriptors of the same order. It shows that the geometric invariants have the worst performance: since high-order geometric moments have a large dynamic range of values [6], their relative errors are unstable. Our descriptors have a slightly lower relative error than Zhu's invariants and the Legendre invariants.

Moments of higher orders are generally considered more sensitive to image noise. To test the robustness of our invariants on degraded images, we add Gaussian noise (with variance varying from 0.01 to 0.3) and salt-and-pepper noise (with noise densities from 0.01 to 0.3) to Figure 1, respectively. The scale invariants of an image are arranged in order of increasing total order $n+p+q$ to construct an invariant vector. In this experiment, the maximum total order of the moment invariants is chosen as 4, so the length of the invariant vector equals 35.

The relative error between the invariant vectors of the original image and the degraded image is defined as

$$E = \frac{\|\mathbf{V} - \mathbf{V}'\|}{\|\mathbf{V}\|}, \quad (30)$$

where $\mathbf{V}$ and $\mathbf{V}'$ are the invariant vectors of the original and the degraded image, respectively, and $\|\cdot\|$ is the Euclidean norm.
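A minimal sketch of this measure (the function name is ours):

```python
# Sketch: relative error ||V - V'|| / ||V|| between two invariant vectors,
# using the Euclidean norm.
import math

def relative_error(V, V_deg):
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(V, V_deg)))
    den = math.sqrt(sum(a ** 2 for a in V))
    return num / den
```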

Relative errors caused by Gaussian and salt-and-pepper noise are depicted in Figures 5 and 6, respectively. We can observe that the relative error increases with the noise level and that the proposed descriptors are more robust to noise than the other three competitors.

In the last experiment, we test the classification ability of our descriptors under both noise-free and noisy conditions. A classifier is required to identify the class of an unknown input object. We use the second- and third-order invariants to form a feature vector; the length of the vector therefore equals 16. During classification, the feature vector of an unknown object is compared with the training feature vector of each class. The Euclidean distance is frequently used as the classification measure; it is defined by

$$d(\mathbf{u}, \mathbf{v}) = \sqrt{\sum_{i=1}^{L} (u_i - v_i)^2}, \quad (31)$$

where $\mathbf{u}$ and $\mathbf{v}$ are the feature vectors of the unknown sample and of a training class, respectively, and $L$ is their length. We define the classification rule such that an unknown input object is assigned to the nearest class. The average classification accuracy is defined as the ratio of the number of correctly classified images to the total number of images.
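The classification rule described above can be sketched as follows (our own illustration; the function names and the training-set layout are assumptions):

```python
# Sketch: minimum-Euclidean-distance classifier and average accuracy.
import math

def euclidean(u, v):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def classify(feature, training):
    # training: dict mapping class label -> training feature vector;
    # the unknown object is assigned to the nearest class
    return min(training, key=lambda label: euclidean(feature, training[label]))

def accuracy(predictions, truths):
    # average classification accuracy: correctly classified / total
    return sum(p == t for p, t in zip(predictions, truths)) / len(truths)
```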

An original set of alphanumeric characters, shown in Figure 7, is used in this experiment. The reason for this choice is that the elements in some subsets may be confused due to their similarity. Every element is scaled by the factors {0.5, 1, 1.5, 2} along the $x$-, $y$-, and $z$-axes, respectively, forming a testing set of 11 classes and 704 images. Additive salt-and-pepper noise with densities 0.01, 0.02, 0.03, and 0.04 is added to the test set. Feature vectors based on the proposed invariants, the geometric invariants, the Legendre invariants, and Zhu's invariants are used to classify these images. The comparison results are listed in Table 1. There is little difference among the four descriptors in the noise-free case; however, as the noise density increases, our method proves more robust than the other three.

5. Conclusions

We have presented a new method to derive scale invariants of 3D Tchebichef moments. To reduce the computation time, we have proposed an efficient algorithm based on matrix multiplication for computing both the 3D moments and the 3D invariants. Experimental results show that our method has better classification ability and is more robust to noise than the existing moment-based methods.

Appendix

Proof of Theorem 1

Taking the definition of the invariants into account, we obtain (A.1). Exchanging the order of summation, we can rewrite (A.1) as (A.2). Since

$$\sum_{k=m}^{n} s(n,k)\, S(k,m) = \sum_{k=m}^{n} S(n,k)\, s(k,m) = \delta_{nm}, \quad (A.3)$$

where $\delta_{nm}$ is the Kronecker symbol.

Substitution of (A.3) into (A.2) yields (A.4). Again exchanging the order of summation and simplifying, we obtain the claimed invariance.

Conflict of Interest

The authors declare that they have no financial and personal relationships with other people or organizations that can inappropriately influence their work. There is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in or the review of this paper.

Acknowledgments

This work was supported by the National Basic Research Program of China under Grant 2011CB707904; the National Natural Science Foundation of China under Grants 61073138, 61271312, and 61201344; the Ministry of Education of China under Grant 20110092110023; the Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education; and the Centre de Recherche en Information Médicale Sino-français (CRIBs). Thanks are also extended to the anonymous reviewers for their useful comments, which helped to improve the quality of the paper.