Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 928161, 12 pages
http://dx.doi.org/10.1155/2012/928161
Research Article

Cutting Affine Moment Invariants

1School of Math and Statistics, Nanjing University of Information Science and Technology, Nanjing 210044, China
2School of Information Science and Technology, East China Normal University, no. 500 Dong-Chuan Road, Shanghai 200241, China

Received 18 December 2011; Accepted 26 January 2012

Academic Editor: Carlo Cattani

Copyright © 2012 Jianwei Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The extraction of affine invariant features plays an important role in many fields of image processing. In this paper, the original image is transformed into new images so that more affine invariant features can be extracted. To construct the new images, the original image is cut into two areas by a closed curve, called the general contour (GC), which is obtained by performing projections along lines with different polar angles. A new image is obtained by changing the gray values of the pixels in the inside area. The traditional affine moment invariants (AMIs) method is then applied to the new image, and cutting affine moment invariants (CAMIs) are derived. Several experiments have been conducted to evaluate the proposed method. Experimental results show that CAMIs can be used in object classification tasks.

1. Introduction

The extraction of affine invariant features plays a very important role in object recognition and has been found applicable in many fields such as shape recognition and retrieval [1, 2], watermarking [3], identification of aircrafts [4, 5], texture classification [6], image registration [7], and contour matching [8].

Many algorithms have been developed for affine invariant feature extraction. Based on whether the features are extracted from the contour only or from the whole shape region, the approaches can be classified into two main categories: region-based methods and contour-based methods [9]. For good overviews of the various techniques, refer to [9–12].

Contour-based methods [4, 5, 13–18] provide better data reduction, and the contour usually offers more shape information than the interior content [9]. But these methods are inapplicable to objects with several separable components (like some Chinese characters).

In contrast to contour-based methods, region-based techniques take all pixels within a shape region into account to obtain the shape representation. Moment invariant methods are the most widely used techniques. The commonly used affine moment invariants (AMIs) [19–21] are extensions of the classical moment invariants first developed by Hu [22]. Although moment-based methods are applicable to binary or gray-scale images with low computational demands, they are sensitive to noise. Hence, only a few low-order moment invariants can be used, which limits the ability of object classification with a large-sized database [18].

A number of new region-based methods have also been introduced, such as Ben-Arie’s frequency domain technique [23, 24], the cross-weighted moment (CWM) [25], and the Trace transform [26]. A novel approach called multiscale autoconvolution (MSA) was derived by Rahtu et al. [27]. These new methods give high accuracy, but usually at the expense of high complexity and computational demands; it is reported in [27] that computing CWM and MSA requires large numbers of operations. It can also be shown that some of these methods are sensitive to noise in the background. To derive robust affine invariant features, in [28] we cut the object into slices by division curves derived from the object based on the obtained general contour (GC), and constructed affine invariant descriptors by summing up the gray values of the pixels in each slice. However, the appropriate number of divisions is hard to determine, and cutting the object into many small slices incurs a high computational cost.

Recently, structure moment invariants have been introduced in [29, 30]. These invariants are very efficient in object classification tasks for gray level or color images, but they are inapplicable to binary images: the density of a binary image cannot be changed merely by squaring it.

All in all, contour-based methods apply only to objects with a single boundary, whereas some region-based methods achieve high accuracy but usually at the expense of high computational demands, and other region-based methods are inapplicable to binary images.

To extract affine invariant features more efficiently, we transform the original image into new images in this paper, and affine invariants are extracted from the new images. In order to construct the new images, the original image is cut into two areas: the inside area and the outside area. To establish correspondence between the areas of an image and those of its affine transformed version, as in [28], the general contour (GC) of the image is constructed by performing projections along lines with different polar angles. A nonnegative constant is added to the gray value of every pixel of the inside area; as a result, new images are obtained, and affine invariant features can be derived from them. In this paper, the AMIs method is applied to the obtained new images, and more affine invariant features, the cutting affine moment invariants (CAMIs), are extracted. Furthermore, we combine the CAMIs with the original AMIs (we call the resulting invariants CCAMIs). To test and evaluate the proposed method, several experiments have been conducted. Experimental results show that CAMIs and CCAMIs can be used in object classification tasks.

The rest of the paper is organized as follows. In Section 2, the GC of an image is introduced; the image is then cut into two areas by putting the GC on the image, and a new image is formed by changing the gray values of the inside area. We apply the AMIs method to the new image in Section 3. The performance of the proposed method is evaluated experimentally in Section 4. Finally, some concluding remarks are provided in Section 5.

2. The Construction of New Images

To derive affine invariant features, we construct new images by cutting the original image into two areas. New images are obtained by changing the gray values of the pixels in one of these areas.

2.1. GC of an Image

Suppose that an image is represented by f(x, y) in the xy-plane. Firstly, the origin of the reference system is translated to the centroid of the image. To derive the general contour of the image, the Cartesian coordinate system is converted to a polar coordinate system, so that the shape can be represented by a function of r and θ, namely f(r, θ), where r ≥ 0 and 0 ≤ θ < 2π. Projections are taken along lines from the centroid with different polar angles by computing the following integral:

h(θ) = ∫₀^∞ f(r, θ) dr,  (2.2)

where 0 ≤ θ < 2π.

Definition 2.1. For an angle θ, let h(θ) be given by (2.2); then (h(θ), θ) denotes a point in the polar plane. As θ goes from 0 to 2π, the points (h(θ), θ) form a closed curve. We call this closed curve the general contour (GC) of the image.

By (2.2), a single value h(θ) corresponds to each angle θ. Consequently, a single closed curve can be derived from any image. For an image f, we denote the GC extracted from it by GC(f). Equation (2.2) is called the central projection transform in [31–33]. It has been used in those papers to extract rotation invariant signatures by combining wavelet analysis and fractal theory. Satisfactory classification rates have been achieved in the recognition of rotated English letters, Chinese characters, handwritten signatures, and so forth. As mentioned above, in [28], we employed the GC to derive division curves that cut the object into slices, and constructed affine invariant descriptors by summing up the gray values of the pixels in each slice. However, the appropriate number of divisions is hard to determine. In this paper, we instead use the GC to construct new images, from which affine invariant features are extracted.
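On a discrete image, the central projection (2.2) can be approximated by sampling each ray from the centroid at unit steps and summing the gray values it crosses. The following NumPy sketch is illustrative (the function name and the nearest-pixel sampling scheme are our assumptions, not taken from the paper):

```python
import numpy as np

def general_contour(img, n_angles=360):
    """Approximate h(theta) of Eq. (2.2): for each polar angle, sum the
    gray values along the ray cast from the image centroid."""
    height, width = img.shape
    ys, xs = np.nonzero(img)
    total = img.sum()
    cx = (xs * img[ys, xs]).sum() / total   # centroid x
    cy = (ys * img[ys, xs]).sum() / total   # centroid y
    r_max = int(np.hypot(height, width))    # longest ray worth sampling
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    h = np.zeros(n_angles)
    r = np.arange(r_max)
    for i, t in enumerate(thetas):
        # sample the ray (cx + r cos t, cy + r sin t) at unit steps
        x = np.round(cx + r * np.cos(t)).astype(int)
        y = np.round(cy + r * np.sin(t)).astype(int)
        ok = (x >= 0) & (x < width) & (y >= 0) & (y < height)
        h[i] = img[y[ok], x[ok]].sum()
    return thetas, h
```

For a centered blob the resulting h(θ) is symmetric under θ → θ + π, which is a quick sanity check on an implementation.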

2.2. The Affine Property of GC

An affine transformation of coordinates is defined as

(x′, y′)ᵀ = A (x, y)ᵀ + b,  (2.4)

where b = (b₁, b₂)ᵀ is a translation vector and A is a 2-by-2 nonsingular matrix with real entries.

Affine transformations map parallel lines onto parallel lines and intersecting lines onto intersecting lines. Based on these facts, it can be shown that the GC extracted from an affine transformed image is the same affine transformed version of the GC extracted from the original image. In other words, if two images f and f′ are related by an affine transformation as in (2.4), that is, f′(x′, y′) = f(x, y) with (x, y) ∈ D and (x′, y′) ∈ D′, where D and D′ are the supports of f and f′, respectively, then GC(f) and GC(f′) are related by the same affine transformation too:

GC(f′) = A · GC(f) + b.  (2.5)

2.3. The Construction of New Images

To construct new images, we put the GC on the original image. The image is thus cut into two areas: the inside area (denoted D_in) and the outside area (denoted D_out). In Figure 1(b), we put the GC of Figure 1(a) on the image. Figure 1(c) is the inside area of the image, and Figure 1(d) is the outside area of the image.

Figure 1: (a) Chinese character “Fu”. (b) The GC of Figure 1(a) superimposed on the image. (c) The inside area of Figure 1(a). (d) The outside area of Figure 1(a).

As mentioned above, the GC preserves the affine transformation signature. As a result, the inside area preserves the affine transformation too. If two images f and f′ are related by an affine transformation as in (2.4), then D_in and D′_in, the inside areas of f and f′, are related by the same affine transformation too:

D′_in = A · D_in + b.  (2.6)

For example, Figure 2(a) is an affine transformed version of Figure 1(a). We put the GC of Figure 2(a) on the image (as shown in Figure 2(b)). Figure 2(c) is the inside area of the image of Figure 2(b), and Figure 2(d) is the outside area. We observe that Figures 2(c) and 2(d) are affine transformed versions of Figures 1(c) and 1(d), under the same affine transformation that maps Figure 1(a) to Figure 2(a).

Figure 2: (a) An affine transformed version of Figure 1(a). (b) The GC of Figure 2(a) superimposed on the image. (c) The inside area of Figure 2(a). (d) The outside area of Figure 2(a).

Consequently, new images can be constructed by changing the gray values of the pixels in D_in. For an image f, a constant c (c ≥ 0) is added to the gray value of every pixel in D_in. The obtained new image is denoted f_c:

f_c(x, y) = f(x, y) + c if (x, y) ∈ D_in,  f_c(x, y) = f(x, y) otherwise.  (2.7)

For different c, various new images can be derived. Obviously, f_c is the original image if c = 0.
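Given a sampled GC, the construction (2.7) amounts to testing, for every pixel, whether its distance from the centroid falls below the GC radius at its angle, and adding c where it does. A minimal sketch under that assumption (the nearest-sampled-angle lookup is an illustrative choice, not the paper's prescription):

```python
import numpy as np

def cut_image(img, thetas, h, c=0.1):
    """Build the new image f_c of Eq. (2.7): add the constant c to every
    pixel whose radius from the centroid is below the GC radius h(theta)."""
    height, width = img.shape
    ys, xs = np.indices((height, width))
    total = img.sum()
    cx = (xs * img).sum() / total
    cy = (ys * img).sum() / total
    ang = np.mod(np.arctan2(ys - cy, xs - cx), 2 * np.pi)
    rad = np.hypot(xs - cx, ys - cy)
    # nearest sampled GC angle for each pixel
    idx = np.round(ang / (2 * np.pi) * len(thetas)).astype(int) % len(thetas)
    inside = rad <= h[idx]
    return img + c * inside
```

Because the GC itself transforms affinely with the image, the same rule applied to an affine-deformed image marks the correspondingly deformed inside area.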

Suppose that f′ is an affine transformed image of the original image f, that f_c is the new image constructed from f by (2.7), and that f′_c is the new image constructed from f′ by (2.7). Then f′_c is the same affine transformed version of f_c. For example, we add 0.1 to the inside area of Figure 1(a); the obtained new image is shown in Figure 3(a). The gray value of the inside area of Figure 2(a) is also increased by 0.1; the obtained image is shown in Figure 3(b). We observe that Figure 3(b) is the same affine transformed version of Figure 3(a) as Figure 2(a) is of Figure 1(a).

Figure 3: (a) New image constructed from Figure 1(a). (b) New image constructed from Figure 2(a).

Some well-developed methods can be applied to the derived new images, so that more affine invariant features can be constructed. As mentioned above, only a few low-order moment invariants can be used for object classification. By applying the AMIs method to the derived new images, more low-order moment invariants can be extracted. We construct these new affine moment invariants in the next section.

3. Cutting Affine Moment Invariants

By applying various region-based methods to the derived new images, affine invariant features can be extracted. As mentioned above, the AMIs method is a region-based method with low computational demands. We therefore apply AMIs to the constructed new images.

The geometric moment of the new image f_c is defined as

m_pq = ∫∫ x^p y^q f_c(x, y) dx dy,  (3.1)

where p, q are nonnegative integers. μ_pq is the central moment

μ_pq = ∫∫ (x − x̄)^p (y − ȳ)^q f_c(x, y) dx dy,  (3.2)

where x̄ = m₁₀/m₀₀ and ȳ = m₀₁/m₀₀ are the coordinates of the centroid of the image.
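On a discrete image, the integrals in (3.1) and (3.2) become sums over the pixel grid. A minimal sketch of the central moment computation:

```python
import numpy as np

def central_moment(img, p, q):
    """mu_pq of Eq. (3.2), discretized: sum over pixels of
    (x - xbar)^p * (y - ybar)^q * f(x, y)."""
    height, width = img.shape
    ys, xs = np.indices((height, width))
    m00 = img.sum()
    xbar = (xs * img).sum() / m00   # centroid x = m10 / m00
    ybar = (ys * img).sum() / m00   # centroid y = m01 / m00
    return (((xs - xbar) ** p) * ((ys - ybar) ** q) * img).sum()
```

By construction μ₁₀ = μ₀₁ = 0, which is a convenient correctness check.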

For two points (x_j, y_j) and (x_k, y_k), we denote the cross product as

C_jk = x_j y_k − x_k y_j.  (3.3)

After an affine transformation, the following equation holds:

C′_jk = J · C_jk,  (3.4)

where J denotes the Jacobian of the affine transformation: J = det(A).

For N points (x_k, y_k) (k = 1, 2, …, N) and nonnegative integers n_jk, we define I(f_c) of the form

I(f_c) = ∫ ⋯ ∫ ∏_{j,k} C_jk^{n_jk} ∏_{k=1}^{N} f_c(x_k, y_k) dx₁ dy₁ ⋯ dx_N dy_N.  (3.5)

Denote w = Σ_{j,k} n_jk. We normalize I(f_c) as follows:

Î(f_c) = I(f_c) / μ₀₀^{w+N}.  (3.6)

Using an argument similar to that for the affine moment invariants (see [20], etc.), it can be shown that Î(f_c) is affine invariant. We call these invariants cutting affine moment invariants (CAMIs). If c = 0, these invariants are the same as the moment invariants given in [20].

By expanding the product in (3.5), I(f_c) becomes a polynomial of the moments given in (3.2). Consequently, CAMIs can be computed from the moments given in (3.2); invariants are derived by replacing the moments in the AMIs with the moments of the new image. Here, we use the well-developed theory of the AMIs as described in [19]. The following invariants, the first three AMIs of [19] evaluated on f_c, are used in this paper:

I₁ = (μ₂₀ μ₀₂ − μ₁₁²) / μ₀₀⁴,
I₂ = (μ₃₀² μ₀₃² − 6 μ₃₀ μ₂₁ μ₁₂ μ₀₃ + 4 μ₃₀ μ₁₂³ + 4 μ₀₃ μ₂₁³ − 3 μ₂₁² μ₁₂²) / μ₀₀¹⁰,
I₃ = (μ₂₀ (μ₂₁ μ₀₃ − μ₁₂²) − μ₁₁ (μ₃₀ μ₀₃ − μ₂₁ μ₁₂) + μ₀₂ (μ₃₀ μ₁₂ − μ₂₁²)) / μ₀₀⁷.  (3.7)

If we set c = 0, (3.7) yields the AMIs used in [19]. By changing the constant c, different invariants can be constructed; consequently, more low-order moment invariants can be extracted. We will show that the obtained CAMIs can be used in object classification. Furthermore, we will combine the obtained CAMIs with the traditional AMIs; the resulting features (which we call CCAMIs) are also used in object classification.
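As a concrete instance, the first AMI of Flusser and Suk [19], I₁ = (μ₂₀ μ₀₂ − μ₁₁²)/μ₀₀⁴, can be computed directly from the discrete central moments. Transposing or flipping the image is itself an affine map, so the value must not change; the sketch below uses that as a check (function names are illustrative):

```python
import numpy as np

def mu(img, p, q):
    """Discrete central moment mu_pq of an image."""
    ys, xs = np.indices(img.shape)
    m00 = img.sum()
    xbar = (xs * img).sum() / m00
    ybar = (ys * img).sum() / m00
    return (((xs - xbar) ** p) * ((ys - ybar) ** q) * img).sum()

def ami_i1(img):
    """First affine moment invariant of [19]:
    I1 = (mu20 * mu02 - mu11^2) / mu00^4."""
    return (mu(img, 2, 0) * mu(img, 0, 2) - mu(img, 1, 1) ** 2) \
        / mu(img, 0, 0) ** 4
```

Applying `ami_i1` to the original image gives an AMI; applying it to the new image f_c (constructed with c > 0) gives the corresponding CAMI.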

4. Experiments

In this section, we evaluate the proposed method in object classification tasks. We will show that the derived affine invariants (CAMIs) can be used in object classification. Furthermore, CAMIs can be combined with the original AMIs (we call the resulting invariants CCAMIs). We denote the AMIs used in [19] by I₁, I₂, and I₃ (c = 0 in (3.7)).

In the first experiment, some binary images of Chinese characters are used. The CAMIs used in this experiment are obtained by setting c equal to 10% of the maximum gray value in the image; hence, c is set to 0.1. These CAMIs are denoted CI₁, CI₂, and CI₃. Figure 4(a) shows the six original Chinese characters; some of these characters are very similar. Figure 4(b) shows the same set of characters deformed by affine transformations. The values of the AMIs I₁, I₂, I₃ and the CAMIs CI₁, CI₂, CI₃ are given in Table 1. It can be seen clearly that the CAMIs really are invariant under affine transformations. Furthermore, the CAMIs differ from the original AMIs.

Table 1: AMIs and CAMIs for some similar Chinese characters.
Figure 4: (a) The original six model Chinese characters. (b) Deformed Chinese characters to be recognized.

In the second experiment, we test the combined invariants (CCAMIs) I₁, I₂, I₃, CI₁, CI₂, CI₃. Two groups of Chinese characters, shown in Figures 5(a) and 5(b), are chosen as databases. Each group includes 40 Chinese characters in regular script font; the images in the two groups have different sizes. Some characters in these databases have the same structure, but the number of strokes or the shape of specific strokes may differ slightly. The affine transformations are generated by the following transformation matrix [4]:

A = s [cos θ  −sin θ; sin θ  cos θ] [1  k₁; k₂  1],  (4.1)

where s and θ denote the scaling and rotation transformations, respectively, and k₁, k₂ denote the skewing transformation.
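A scale/rotation/skew matrix of this kind can be assembled as follows (the exact parametrization used in [4] may differ; this decomposition is an illustrative assumption):

```python
import numpy as np

def affine_matrix(s, theta, k1, k2):
    """Compose A = s * R(theta) * [[1, k1], [k2, 1]]: uniform scaling,
    rotation by theta, then a skew with parameters k1, k2."""
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    skew = np.array([[1.0, k1], [k2, 1.0]])
    return s * rot @ skew
```

The matrix is nonsingular whenever s ≠ 0 and k₁k₂ ≠ 1, since det(A) = s²(1 − k₁k₂).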

Figure 5: (a) First group of 40 characters. (b) Second group of 40 characters.

Each character is transformed 140 times as described above. With these affine transformations and the database, 5600 tests are run using the proposed method for each group. In our experiments, the classification accuracy is defined as

η = (N_c / N_t) × 100%,  (4.2)

where N_c denotes the number of correctly classified images and N_t denotes the total number of images used in the test.
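The test protocol reduces to matching each deformed character against the database in invariant space and then computing the accuracy ratio (4.2). A minimal sketch (Euclidean nearest-neighbor matching is our stand-in; the actual classifier of [19] may differ):

```python
import numpy as np

def classify_nn(feats, db):
    """Assign each invariant feature vector to the database entry with
    the smallest Euclidean distance."""
    d = np.linalg.norm(feats[:, None, :] - db[None, :, :], axis=2)
    return d.argmin(axis=1)

def accuracy(pred, truth):
    """eta = N_c / N_t * 100%  (Eq. (4.2))."""
    return 100.0 * np.mean(pred == np.asarray(truth))
```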

The AMIs, CAMIs, and combined invariants CCAMIs are applied to the databases in Figures 5(a) and 5(b). Classification is performed by the method used in [19]. Table 2 shows the results. For the first group of Chinese characters, the performance of the CAMIs is slightly better than that of the AMIs, and the combined invariants CCAMIs outperform both. For the second group, the traditional AMIs perform better than the CAMIs, yet the combined invariants CCAMIs again outperform both. Hence, by combining the original AMIs with the CAMIs, more shape information may be extracted.

Table 2: Classification accuracies of AMIs, CAMIs, and CCAMIs in case of different affine transformations.

5. Conclusions

In this paper, an approach is developed for the extraction of affine invariant features by cutting the image into two areas: the inside area and the outside area. In order to establish correspondence between the areas of an image and those of its affine transformed version, the general contour (GC) of the object is employed. A nonnegative constant is added to the gray value of every pixel of the inside area. Consequently, a new image is obtained, and CAMIs are constructed from it. To test and evaluate the proposed method, several experiments have been conducted. Experimental results show that CAMIs can be used in object classification tasks.

Acknowledgments

This work was supported in part by the National Science Foundation under Grants 60973157 and 61003209, and in part by the Natural Science Foundation of Jiangsu Province Education Department under Grant 08KJB520004. Ming Li acknowledges the 973 plan under Project no. 2011CB302802 and the NSFC under Grants 61070214 and 60873264.

References

  1. M. R. Daliri and V. Torre, “Robust symbolic representation for shape recognition and retrieval,” Pattern Recognition, vol. 41, no. 5, pp. 1799–1815, 2008.
  2. P. L. E. Ekombo, N. Ennahnahi, M. Oumsis, and M. Meknassi, “Application of affine invariant Fourier descriptor to shape based image retrieval,” International Journal of Computer Science and Network Security, vol. 9, no. 7, pp. 240–247, 2009.
  3. X. Gao, C. Deng, X. Li, and D. Tao, “Geometric distortion insensitive image watermarking in affine covariant regions,” IEEE Transactions on Systems, Man and Cybernetics Part C, vol. 40, no. 3, Article ID 5378648, pp. 278–286, 2010.
  4. M. I. Khalil and M. M. Bayoumi, “A dyadic wavelet affine invariant function for 2D shape recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 10, pp. 1152–1164, 2001.
  5. M. I. Khalil and M. M. Bayoumi, “Affine invariants for object recognition using the wavelet transform,” Pattern Recognition Letters, vol. 23, no. 1-3, pp. 57–72, 2002.
  6. G. Liu, Z. Lin, and Y. Yu, “Radon representation-based feature descriptor for texture classification,” IEEE Transactions on Image Processing, vol. 18, no. 5, pp. 921–928, 2009.
  7. R. Matungka, Y. F. Zheng, and R. L. Ewing, “Image registration using adaptive polar transform,” IEEE Transactions on Image Processing, vol. 18, no. 10, pp. 2340–2354, 2009.
  8. Y. Wang and E. K. Teoh, “2D affine-invariant contour matching using B-Spline model,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, pp. 1853–1858, 2007.
  9. D. Zhang and G. Lu, “Review of shape representation and description techniques,” Pattern Recognition, vol. 37, no. 1, pp. 1–19, 2004.
  10. E. Rahtu, A multiscale framework for affine invariant pattern recognition and registration, Ph.D. thesis, University of Oulu, Oulu, Finland, 2007.
  11. R. Veltkamp and M. Hagedoorn, “State-of-the-art in shape matching,” Tech. Rep. UU-CS-1999, 1999.
  12. I. Weiss, “Geometric invariants and object recognition,” International Journal of Computer Vision, vol. 10, no. 3, pp. 207–231, 1993.
  13. K. Arbter, W. E. Snyder, H. Burkhardt, and G. Hirzinger, “Application of affine-invariant Fourier descriptors to recognition of 3-D objects,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 640–647, 1990.
  14. W. S. Lin and C. H. Fang, “Synthesized affine invariant function for 2D shape recognition,” Pattern Recognition, vol. 40, no. 7, pp. 1921–1928, 2007.
  15. F. Mokhtarian and S. Abbasi, “Curvature scale space for shape similarity retrieval under affine transforms,” in Computer Analysis of Images and Patterns, Lecture Notes in Computer Science, pp. 65–72, Springer-Verlag, New York, NY, USA, 1999.
  16. F. Mokhtarian and S. Abbasi, “Affine curvature scale space with affine length parametrisation,” Pattern Analysis and Applications, vol. 4, no. 1, pp. 1–8, 2001.
  17. I. El Rube, M. Ahmed, and M. Kamel, “Wavelet approximation-based affine invariant shape representation functions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 2, pp. 323–327, 2006.
  18. Q. M. Tieng and W. W. Boles, “Wavelet-based affine invariant representation: a tool for recognizing planar objects in 3-D space,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 8, pp. 1287–1296, 1997.
  19. J. Flusser and T. Suk, “Pattern recognition by affine moment invariants,” Pattern Recognition, vol. 26, no. 1, pp. 167–174, 1993.
  20. T. Suk and J. Flusser, “Affine moment invariants generated by graph method,” Pattern Recognition, vol. 44, no. 9, pp. 2047–2056, 2011.
  21. T. H. Reiss, “The revised fundamental theorem of moment invariants,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 8, pp. 830–834, 1991.
  22. M. K. Hu, “Visual pattern recognition by moment invariants,” IEEE Transactions on Information Theory, vol. 8, no. 2, pp. 179–187, 1962.
  23. J. Ben-Arie and Z. Wang, “Pictorial recognition of objects employing affine invariance in the frequency domain,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 6, pp. 604–618, 1998.
  24. S. Y. Chen, J. Zhang, Q. Guan, and S. Liu, “Detection and amendment of shape distortions based on moment invariants for active shape models,” IET Image Processing, vol. 5, no. 3, pp. 273–285, 2011.
  25. Z. Yang and F. S. Cohen, “Image registration and object recognition using affine invariants and convex hulls,” IEEE Transactions on Image Processing, vol. 8, no. 7, pp. 934–946, 1999.
  26. M. Petrou and A. Kadyrov, “Affine invariant features from the Trace transform,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 30–44, 2004.
  27. E. Rahtu, M. Salo, and J. Heikkilä, “Affine invariant pattern recognition using multiscale autoconvolution,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 6, pp. 908–918, 2005.
  28. J. Yang, Z. Chen, W.-S. Chen, and Y. Chen, “Robust affine invariant descriptors,” Mathematical Problems in Engineering, vol. 2011, Article ID 185303, 15 pages, 2011.
  29. Z. M. Li, K. P. Hou, Y. J. Liu, L. H. Diao, and H. Li, “The shape recognition based on structure moment invariants,” International Journal of Information Technology, vol. 12, no. 2, pp. 97–105, 2006.
  30. Z. Li, Y. Zhang, K. Hou, and H. Li, “3D polar-radius invariant moments and structure moment invariants,” Lecture Notes in Computer Science, vol. 3611, pp. 483–492, 2005.
  31. Y. Y. Tang, Y. Tao, and E. C. M. Lam, “New method for feature extraction based on fractal behavior,” Pattern Recognition, vol. 35, no. 5, pp. 1071–1081, 2002.
  32. Y. Tao, E. C. M. Lam, C. S. Huang, and Y. Y. Tang, “Information distribution of the central projection method for Chinese character recognition,” Journal of Information Science and Engineering, vol. 16, no. 1, pp. 127–139, 2000.
  33. Y. Tao, E. C. M. Lam, and Y. Y. Tang, “Feature extraction using wavelet and fractal,” Pattern Recognition Letters, vol. 22, no. 3-4, pp. 271–287, 2001.