Mathematical Problems in Engineering, Volume 2012 (2012), Article ID 690262, 11 pages. http://dx.doi.org/10.1155/2012/690262
Research Article

## Construction of Affine Invariant Functions in Spatial Domain

1School of Mathematics and Statistics, Nanjing University of Information Science and Technology, Nanjing 210044, China
2Department of Mathematics G. Castelnuovo, University of Rome La Sapienza, Piazzale Aldo Moro 2, 00185 Rome, Italy

Received 17 January 2012; Accepted 9 March 2012

Copyright © 2012 Jianwei Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Affine invariant functions are constructed in spatial domain. Unlike the previous affine representation functions in transform domain, these functions are constructed directly on the object contour without any transformation. To eliminate the effect of the choice of points on the contour, an affine invariant function using seven points on the contour is constructed. For objects with several separable components, a closed curve is derived to construct the affine invariant functions. Several experiments have been conducted to evaluate the performance of the proposed method. Experimental results show that the constructed affine invariant functions can be used for object classification.

#### 1. Introduction

Recognizing objects that are subjected to certain viewing transformations is important in the field of computer vision [1]. The affine transformation may be used as an approximation to viewpoint-related changes of objects [2–4]. Typical geometric transformations such as rotation, translation, scaling, and skewing are included in the affine transformation.

The extraction of affine invariant features plays a very important role in object recognition and has found application in many fields, such as shape recognition and retrieval [5, 6], watermarking [7], identification of aircraft [1, 8], texture classification [9], image registration [10], and contour matching [11].

Many algorithms have been developed for affine invariant feature extraction. Based on whether the features are extracted from the contour only or from the whole shape region, the approaches can be classified into two main categories: region-based methods and contour-based methods [12]. For good overviews of the various techniques, refer to [12–15]. Contour-based methods provide better data reduction, and the contour usually offers more shape information than the interior content [12]. A number of contour-based methods have been introduced in recent years. The affine invariant function (AIF) in these papers is usually constructed in a transform domain (see [1, 8, 16–20], etc.).

Due to the spatial and frequency localization property of wavelets, many wavelet-based algorithms have been developed for the extraction of affine invariant features. It is reported that these wavelet-based methods outperform Fourier descriptors [1, 8, 19]. In these methods, the object boundary is first analyzed by a wavelet transform at different scales. The obtained approximation and detail signals are then used for the construction of AIFs. The choice of the signals, the number of decomposition levels, and the wavelet functions used have all resulted in a number of different approaches, and many promising results have been reported. Alferez and Wang [21] proposed geometric and illumination invariants for object recognition depending on the detail coefficients of a dyadic wavelet decomposition. Tieng and Boles [19] developed an approximation-detail AIF using one dyadic level only. Another AIF, the detail-detail representation function, was derived by Khalil and Bayoumi using a dyadic wavelet transform [1, 8]; the invariant function is computed by utilizing two, three, or four dyadic scale levels. Recently, an AIF based on the approximation coefficients has been developed by applying two different wavelet transforms with different wavelet bases [18]. A synthesized AIF was proposed by Lin and Fang [17] using synthesized feature signals of the shape.

However, in all these methods, AIFs are constructed in a transform domain. That is to say, the shape contour is first transformed by a linear operator (e.g., a wavelet or Fourier transform), and AIFs are then constructed from the transformed contour. In this paper, we construct AIFs directly from the shape contour without any transformation. Equidistant points on the object contour are used to construct the AIFs. To eliminate the effect of the choice of points on the contour, an AIF using seven points on the contour is constructed. In addition, the shape contour is not available [12] in many cases. For example, the image of the Chinese character “Yang’’ shown in Figure 3 consists of several components, and AIFs cannot be constructed from such objects directly. To address this problem, we derive a closed curve, called the general contour (GC), from the object. The GC is obtained by performing projections along lines with different polar angles. The GC derived from the affine transformed object is the affine transformed version of the GC of the original object, so AIFs can be constructed in the spatial domain from the derived GC. Several experiments have been conducted to evaluate the performance of the proposed method. Experimental results show that the constructed affine invariant functions can be used for object classification.

The rest of the paper is organized as follows: in Section 2, some basic concepts about affine transform are introduced. AIFs are constructed in Section 3. The performance of the proposed method is evaluated experimentally in Section 4. Finally, some conclusion remarks are provided in Section 5.

#### 2. Preliminaries

##### 2.1. Affine Transformation

Consider a parametric point $(x(t), y(t))$ with parameter $t$ on the object contour. The affine transformation consists of a linear transformation and a translation as follows:
$$x'(t) = a_{11}\,x(t) + a_{12}\,y(t) + b_1, \qquad y'(t) = a_{21}\,x(t) + a_{22}\,y(t) + b_2. \tag{2.1}$$
The above equations can be written in the following matrix form:
$$\begin{pmatrix} x'(t) \\ y'(t) \end{pmatrix} = A \begin{pmatrix} x(t) \\ y(t) \end{pmatrix} + B, \tag{2.2}$$
where the nonsingular matrix $A = (a_{ij})_{2\times 2}$ represents the scaling, rotation, and skewing transformations, and the vector $B = (b_1, b_2)^T$ corresponds to the translation.

If $I$ is an affine invariant function and $I'$ is the same invariant function calculated using the points under the affine transformation, then the relation between them can be formulated as
$$I' = (\det A)^{w}\, I, \tag{2.3}$$
where $\det A$ is the determinant of the matrix $A$. The exponent $w$ of the power is called the weight of the invariant. If $w = 0$, the function is called an absolute invariant. If $w \neq 0$, the function is called a relative invariant.

##### 2.2. Affine Invariant Parameters

To establish a one-to-one relation between two contours, the object contour should be parameterized. The arc length parameter transforms linearly under any similarity transformation, including translation, rotation, and scaling, but it is not a suitable parameter for constructing affine invariant functions.

There are two parameters that are linear under an affine transformation: the affine arc length [22] and the enclosed area [16]. The affine arc length is defined as follows:
$$\tau = \int_{t_0}^{t_1} \left(\dot{x}\,\ddot{y} - \ddot{x}\,\dot{y}\right)^{1/3} dt, \tag{2.4}$$
where $\dot{x}$, $\dot{y}$ and $\ddot{x}$, $\ddot{y}$ are the first and second derivatives of $x$ and $y$ with respect to the arc length parameter $t$. Arbter et al. [16] defined the enclosed area parameter as follows:
$$\sigma = \frac{1}{2} \int_{t_0}^{t_1} \left| x\,\dot{y} - y\,\dot{x} \right| dt. \tag{2.5}$$
These two parameters can be made completely invariant by simply normalizing them with respect to either the total affine arc length or the total enclosed area of the contour. In the discrete case, the derivatives of $x$ and $y$ can be calculated using finite difference equations. To establish a one-to-one relation between two parameterized contours, the contour should be normalized and resampled as in [19]. In the experiments of this paper, we use the enclosed area as the parameter. The curve normalization approach used in this paper mainly consists of the following steps [23].
(i) For the discrete object contour $\{p_i\}_{i=1}^{N}$, compute the total area $S$ swept by the object contour about the centroid by the following formula:
$$S = \frac{1}{2}\sum_{i=1}^{N} \left| (p_i - O) \times (p_{i+1} - O) \right|, \tag{2.6}$$
where $O$ denotes the centroid of the object. Let the number of points on the contour after the parameterization be $N_0$, and denote $\Delta S = S / N_0$.
(ii) Select a starting point $q_1$ on the object contour as the starting point of the normalized curve. From $q_1$, search along the contour for a point $q_2$ such that the area of the closed zone, namely the polygon $O q_1 q_2$, equals $\Delta S$.
(iii) Using the same method, from point $q_2$, calculate all the remaining points $q_3, \dots, q_{N_0}$ along the object contour.

In the experiments of this paper, the object contour or GC is normalized and resampled to a fixed number of points $N_0$.
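The equal-area normalization steps above can be sketched in NumPy. This is a minimal sketch, assuming the boundary points are ordered along the curve; the helper name `equal_area_resample` and the chord-interpolation inside a segment are choices of this sketch, not part of the paper.

```python
import numpy as np

def equal_area_resample(contour, n_out=64):
    """Resample a closed contour so that consecutive output points sweep
    equal triangle areas about the centroid (the enclosed-area parameter).

    contour: (N, 2) array of ordered boundary points.
    Returns an (n_out, 2) array.
    """
    c = np.asarray(contour, dtype=float)
    centroid = c.mean(axis=0)
    p = c - centroid                         # centroid as origin
    q = np.roll(p, -1, axis=0)
    # area of the triangle formed by the centroid and consecutive samples
    tri = 0.5 * np.abs(p[:, 0] * q[:, 1] - p[:, 1] * q[:, 0])
    cum = np.concatenate(([0.0], np.cumsum(tri)))   # cumulative swept area
    total = cum[-1]
    targets = np.arange(n_out) * total / n_out      # k * (S / N0)
    out = np.empty((n_out, 2))
    for k, s in enumerate(targets):
        i = np.searchsorted(cum, s, side="right") - 1
        i = min(i, len(p) - 1)
        # interpolate inside segment i by the fraction of area still needed
        frac = 0.0 if tri[i] == 0 else (s - cum[i]) / tri[i]
        out[k] = p[i] + frac * (q[i] - p[i])
    return out + centroid
```

In practice the input contour should be sampled densely enough that the linear interpolation inside a segment introduces negligible area error.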

#### 3. Affine Invariant Object Representation

In this part, we will derive invariant function from the normalized object contours. Correlation coefficient is used to measure the similarity of two AIFs. To construct AIFs from objects with several separable components, we convert the object into a closed curve by performing projections along lines with different polar angles.

##### 3.1. AIFs Construct in Spatial Domain

Let $(x(t), y(t))$ and $(x'(t), y'(t))$ be the parametric equations of two contours that differ only by an affine transformation. For simplicity, in this subsection, we assume that the starting points on both contours are identical. After normalizing and resampling, there is a one-to-one relation between $(x(t), y(t))$ and $(x'(t), y'(t))$. We use the object centroid as the origin; then the translation factor is eliminated, and (2.2) can be written in matrix form as
$$\begin{pmatrix} x'(t) \\ y'(t) \end{pmatrix} = A \begin{pmatrix} x(t) \\ y(t) \end{pmatrix}.$$

Let $t_0$ be an arbitrary positive constant; then $(x(t+t_0), y(t+t_0))$ is a shifted version of $(x(t), y(t))$. We define the following function:
$$f(t) = \det \begin{pmatrix} x(t) & x(t+t_0) \\ y(t) & y(t+t_0) \end{pmatrix}, \tag{3.1}$$
where $\det$ denotes the determinant of a matrix. As a result of normalizing and resampling, the two contours satisfy the following equations:
$$\begin{pmatrix} x'(t) \\ y'(t) \end{pmatrix} = A \begin{pmatrix} x(t) \\ y(t) \end{pmatrix}, \qquad \begin{pmatrix} x'(t+t_0) \\ y'(t+t_0) \end{pmatrix} = A \begin{pmatrix} x(t+t_0) \\ y(t+t_0) \end{pmatrix}. \tag{3.2}$$
It follows that
$$f'(t) = \det(A)\, f(t). \tag{3.3}$$
In other words, $f(t)$ given in (3.1) is a relative invariant function. To eliminate the factor $\det(A)$ in (3.3), $f(t)$ needs to be normalized. We normalize $f(t)$ as follows:
$$\widetilde{f}(t) = \frac{f(t)}{\mathrm{EAN}}, \tag{3.4}$$
where EAN denotes the enclosed area of the object contour as defined in (2.6). It follows from (3.3) that $\widetilde{f}(t)$ given in (3.4) is an AIF. In [1, 8, 16–20], the shape contour is first transformed by a linear operator (such as a wavelet or Fourier transform), and AIFs are then constructed from the transformed contour. In our method, the AIF given in (3.4) is constructed directly from the shape contour without any transformation.
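The construction (3.1)–(3.4) is short enough to sketch directly in NumPy. This assumes the contour has already been normalized and resampled; the helper name `aif` and the shoelace formula used for the enclosed-area normalizer are choices of this sketch.

```python
import numpy as np

def aif(contour, t0=32):
    """Relative invariant f(t) = det[u(t), u(t+t0)] on a centroid-centered
    closed contour, normalized by the enclosed area so that the det(A)
    factor of an affine map cancels (cf. (3.1)-(3.4))."""
    u = np.asarray(contour, dtype=float)
    u = u - u.mean(axis=0)               # centroid as origin: removes translation
    v = np.roll(u, -t0, axis=0)          # shifted copy u(t + t0)
    f = u[:, 0] * v[:, 1] - u[:, 1] * v[:, 0]   # 2x2 determinant per sample
    q = np.roll(u, -1, axis=0)
    # shoelace formula for the enclosed area (EAN)
    area = 0.5 * np.abs(np.sum(u[:, 0] * q[:, 1] - u[:, 1] * q[:, 0]))
    return f / area
```

Under a linear map $A$ with $\det A > 0$, each determinant scales by $\det A$ and the area by $|\det A|$, so the returned signal is unchanged up to the starting-point shift.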

Figure 1(a) shows a plane object, and Figure 1(b) shows its boundary. Figure 1(c) shows the AIF defined in (3.3) associated with Figure 1(b). Figure 2(a) shows an affine transformed version of the plane in Figure 1(a), and Figure 2(b) shows its boundary. Figure 2(c) shows the AIF derived from Figure 2(b). In Figures 1(c) and 2(c), $t_0$ is set to 32. Note that after the affine transformation, the starting points of the AIFs are different. We observe that Figure 2(c) is nearly a translated version of Figure 1(c).

Figure 1: (a) A plane object. (b) The boundary of plane in (a). (c) The invariant function for the boundary in (b).
Figure 2: (a) An affine transformation version of Figure 1(a). (b) The boundary of plane in (a). (c) The AIF for the boundary in (b).
Figure 3: (a) The Chinese character “Yang’’. (b) The GC of the character in (a). (c) The AIF for the GC in (b).

Experimental results show that the choice of $t_0$ may affect the accuracy of object classification based on $f(t)$: some choices of $t_0$ result in lower accuracy, while others result in higher accuracy. To eliminate the effect of the choice of $t_0$, we construct AIFs that involve more points on the object contour. In the experiments of this paper, we use seven equidistant partition points of the object contour, $(x(t + kN_0/7), y(t + kN_0/7))$ for $k = 0, 1, \dots, 6$, to construct the AIF (3.5). Indeed, it can be shown that homogeneous polynomials, with arbitrary constant coefficients, in the determinants formed from these points are also AIFs after suitable normalization.
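The paper's exact seven-point formula (3.5) is not reproduced above. As an illustrative variant in the same spirit (an assumption of this sketch, not the paper's formula), one can sum the pairwise determinants of seven equidistant shifts of the contour and normalize by the enclosed area; each pairwise determinant picks up the same $\det(A)$ factor under an affine map, so the sum is again a relative invariant.

```python
import numpy as np

def aif_multi(contour, n_parts=7):
    """Illustrative multi-point invariant: sum of pairwise determinants of
    the contour point and its shifts by multiples of N0/n_parts, normalized
    by the enclosed area. A sketch in the spirit of the seven-point AIF,
    not the paper's exact formula (3.5)."""
    u = np.asarray(contour, dtype=float)
    u = u - u.mean(axis=0)               # centroid as origin
    n0 = len(u)
    shifts = [np.roll(u, -(k * n0 // n_parts), axis=0) for k in range(n_parts)]
    f = np.zeros(n0)
    for i in range(n_parts):
        for j in range(i + 1, n_parts):
            a, b = shifts[i], shifts[j]
            f += a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]   # det of the point pair
    q = np.roll(u, -1, axis=0)
    area = 0.5 * np.abs(np.sum(u[:, 0] * q[:, 1] - u[:, 1] * q[:, 0]))
    return f / area
```

Because every term scales identically, the construction no longer depends on a single arbitrary shift $t_0$.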

##### 3.2. Measurement of the Similarity between Two AIFs

We have seen from Figures 1(c) and 2(c) that an affine transformation may result in a translated version of the AIF. To eliminate the effect of the starting point, a one-dimensional Fourier transform can be applied to the obtained AIF; invariance is then achieved by ignoring the phase of the coefficients and keeping only their magnitudes. This approach has a lower computational complexity, since the FFT is faster than shift matching [24].

In this paper, we construct AIFs in the spatial domain. Therefore, to eliminate the effect of the starting point, we use the correlation coefficient, as in [18], to measure the similarity between two AIFs. For two sequences $g$ and $h$, the normalized cross-correlation is defined as follows:
$$R(m) = \frac{\sum_{n}\left(g(n) - \bar{g}\right)\left(h(n+m) - \bar{h}\right)}{\sqrt{\sum_{n}\left(g(n) - \bar{g}\right)^2}\;\sqrt{\sum_{n}\left(h(n) - \bar{h}\right)^2}}, \tag{3.6}$$
where $\bar{g}$ and $\bar{h}$ denote the means of the two sequences. One of the sequences, $g$ or $h$, is extended periodically, and the maximum value of the correlation over all shifts $m$ is selected. Such an arrangement reduces the effect of the variation of the boundary starting point [18]; consequently, translation invariance is achieved. Based on [25–27], some other approaches can be derived to eliminate the effect of the starting point.
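The maximum-over-shifts correlation can be sketched as follows; the helper name `max_norm_xcorr` is this sketch's, and the brute-force loop over circular shifts (O(N^2)) stands in for an FFT-based correlation that one would use for long sequences.

```python
import numpy as np

def max_norm_xcorr(g, h):
    """Maximum normalized cross-correlation over all circular shifts,
    removing the starting-point ambiguity between two AIFs."""
    g = np.asarray(g, dtype=float) - np.mean(g)
    h = np.asarray(h, dtype=float) - np.mean(h)
    denom = np.linalg.norm(g) * np.linalg.norm(h)
    if denom == 0:
        return 0.0
    # periodic extension of h is realized by circular shifts (np.roll)
    best = max(float(np.dot(g, np.roll(h, m))) for m in range(len(h)))
    return best / denom
```

Two AIFs that differ only by a starting-point shift score 1.0 under this measure, which is what makes it usable as a classification similarity.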

##### 3.3. AIFs for Objects with Several Separable Components

The AIFs given in (3.4) and (3.5) can be applied to an object contour. But, in real life, many objects consist of several separable components (such as the Chinese character “Yang’’ in Figure 3(a)). Object contours are not available for these objects, so the AIFs given in Section 3.1 cannot be applied to them. To address this problem, we convert the object into a closed curve by performing projections along lines with different polar angles (called the central projection transformation in [28]). The obtained closed curve is called the general contour (GC) in [29]. It can be proved that the GC extracted from the affine transformed object is also an affine transformed version of the GC extracted from the original object. Consequently, the AIFs given in Section 3.1 can be constructed from the GC of the object. For example, Figure 3(b) shows the GC of Figure 3(a), and Figure 3(c) shows the AIF derived from this GC.
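A GC in the spirit of the central projection transformation can be sketched as below. This is a simplified reading of [28, 29]: the angular binning, the use of summed pixel intensity as the radius, and the helper name `general_contour` are assumptions of this sketch rather than the papers' exact procedure.

```python
import numpy as np

def general_contour(image, n_angles=128):
    """Sketch of a general contour (GC): for each polar angle about the
    intensity centroid, place a point whose radius is the summed intensity
    of the object pixels falling in that angular sector. Works for objects
    with several separable components, where no single contour exists."""
    ys, xs = np.nonzero(image)
    w = image[ys, xs].astype(float)
    cy = np.average(ys, weights=w)           # intensity centroid
    cx = np.average(xs, weights=w)
    ang = np.arctan2(ys - cy, xs - cx) % (2 * np.pi)
    bins = (ang / (2 * np.pi) * n_angles).astype(int) % n_angles
    # radius per sector = total mass projected along that direction
    radius = np.bincount(bins, weights=w, minlength=n_angles)
    theta = (np.arange(n_angles) + 0.5) * 2 * np.pi / n_angles
    return np.stack([cx + radius * np.cos(theta),
                     cy + radius * np.sin(theta)], axis=1)
```

The resulting closed curve can then be fed to the equal-area normalization and the AIF construction of Section 3.1 in place of a boundary contour.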

#### 4. Experiment

In this section, we evaluate the discriminative ability of the proposed method. In the first experiment, we examine the proposed method using some airplane images, from which object contours can be derived. In the second experiment, we evaluate the discriminative ability of the proposed method using some Chinese characters. These characters have several separable components, so contours are not available for these objects.

In the following experiments, the classification accuracy is defined as
$$\eta = \frac{N_c}{N_t} \times 100\%, \tag{4.1}$$
where $N_c$ denotes the number of correctly classified images, and $N_t$ denotes the total number of images applied in the test. Affine transformations are generated by the following matrix [1]:
$$A = s \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}, \tag{4.2}$$
where $s$ and $\theta$ denote the scaling and rotation transformations, respectively, and $k$ denotes the skewing transformation. For each object, the affine transformations are generated by letting the parameters $s$, $\theta$, and $k$ in (4.2) range over fixed sets of values; in total, each image is transformed 168 times.
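Generating such a test family is straightforward; the sketch below uses one plausible scale-rotation-skew decomposition, and the example parameter grids (2 scales, 12 rotations, 7 skews, giving 168 matrices) are assumptions of this sketch, since the paper's exact parameter sets are not reproduced here.

```python
import numpy as np

def affine_family(scales, angles, skews):
    """Generate test affine matrices A = s * R(theta) * K(k): uniform scale,
    rotation, then horizontal skew. One plausible decomposition of (4.2);
    not the paper's exact parameter sets."""
    mats = []
    for s in scales:
        for th in angles:
            for k in skews:
                R = np.array([[np.cos(th), -np.sin(th)],
                              [np.sin(th),  np.cos(th)]])
                K = np.array([[1.0, k],
                              [0.0, 1.0]])
                mats.append(s * (R @ K))
    return mats

# e.g. 2 scales x 12 rotations x 7 skews = 168 transforms per image
family = affine_family([0.8, 1.2],
                       np.linspace(0, 2 * np.pi, 12, endpoint=False),
                       np.linspace(-0.6, 0.6, 7))
```

Every matrix in the family is nonsingular (its determinant is $s^2$), so each transformed contour remains a valid test shape.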

##### 4.1. Air Plane Image Classification

The first experiment is conducted to classify the airplane images. Seven airplane images shown in Figure 4 are used as models in this experiment. Some of these models represent different objects with similar contours, such as model 6 and model 7; they can easily be misclassified due to their similarity. We test the effect of the choice of the constant $t_0$. The contour is normalized and resampled as described in Section 2.2. For each airplane image, the affine transformations are generated by setting the parameters in (4.2) as aforementioned; therefore, each image is transformed 168 times, that is, the test is repeated 1176 times. Table 1 shows the classification accuracy for different constants $t_0$ and for the AIF given in (3.5). It can be observed that different accuracies may be achieved with different $t_0$; for some values of $t_0$, the accuracy rates are very low. To eliminate the effect of the choice of $t_0$, AIFs involving more points can be used for object classification. In the rest of this paper, we use the AIF given in (3.5) to extract affine invariant features.

Table 1: Classification accuracies for different $t_0$ under different affine transformations.
Figure 4: The airplane models.
##### 4.2. The Classification of Objects with Several Separable Components

In this experiment, we extract affine invariant features from objects with several separable components. The 10 Chinese characters shown in Figure 5 are used as the database. These characters are in regular script font, and all character images are of the same size. Each of these characters consists of several separable components. Some characters have the same structure, but the number of strokes or the shape of specific strokes may differ slightly. As aforementioned, each character image is transformed 168 times; that is, the test is repeated 1680 times. Experiments on the Chinese characters in Figure 5 and their affine transformations show that 96.25% classification accuracy can be achieved by using the AIF given in (3.5).

Figure 5: Test characters used in the second experiment.

#### 5. Conclusions

In this paper, we construct AIFs in spatial domain. Unlike the previous affine representation functions in transform domain, these AIFs are constructed directly on the object contour without any transformation. This technique is based upon object contours, parameterized by an affine invariant parameter, and shifting of the contour. To eliminate the effect of the choice of points on the contour, an AIF using seven points on the contour is constructed. For objects with several separable components, a closed curve is derived to construct the AIFs. Several experiments have been conducted to evaluate the performance of the proposed method.

#### Acknowledgments

This work was supported in part by the National Science Foundation under Grants 60973157 and 61003209, and in part by the Natural Science Foundation of the Jiangsu Province Education Department under Grant 08KJB520004.

#### References

1. M. I. Khalil and M. M. Bayoumi, “A dyadic wavelet affine invariant function for 2D shape recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 10, pp. 1152–1164, 2001.
2. M. K. Hu, “Visual pattern recognition by moment invariants,” IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179–187, 1962.
3. J. Flusser and T. Suk, “Pattern recognition by affine moment invariants,” Pattern Recognition, vol. 26, no. 1, pp. 167–174, 1993.
4. T. Suk and J. Flusser, “Affine moment invariants generated by graph method,” Pattern Recognition, vol. 44, no. 9, pp. 2047–2056, 2011.
5. M. R. Daliri and V. Torre, “Robust symbolic representation for shape recognition and retrieval,” Pattern Recognition, vol. 41, no. 5, pp. 1799–1815, 2008.
6. P. L. E. Ekombo, N. Ennahnahi, M. Oumsis, and M. Meknassi, “Application of affine invariant fourier descriptor to shape based image retrieval,” International Journal of Computer Science and Network Security, vol. 9, no. 7, pp. 240–247, 2009.
7. X. B. Gao, C. Deng, X. Li, and D. Tao, “Geometric distortion insensitive image watermarking in affine covariant regions,” IEEE Transactions on Systems, Man and Cybernetics C, vol. 40, no. 3, Article ID 5378648, pp. 278–286, 2010.
8. M. I. Khalil and M. M. Bayoumi, “Affine invariants for object recognition using the wavelet transform,” Pattern Recognition Letters, vol. 23, no. 1–3, pp. 57–72, 2002.
9. G. Liu, Z. Lin, and Y. Yu, “Radon representation-based feature descriptor for texture classification,” IEEE Transactions on Image Processing, vol. 18, no. 5, pp. 921–928, 2009.
10. R. Matungka, Y. F. Zheng, and R. L. Ewing, “Image registration using adaptive polar transform,” IEEE Transactions on Image Processing, vol. 18, no. 10, pp. 2340–2354, 2009.
11. Y. Wang and E. K. Teoh, “2D affine-invariant contour matching using B-Spline model,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, pp. 1853–1858, 2007.
12. D. Zhang and G. Lu, “Review of shape representation and description techniques,” Pattern Recognition, vol. 37, no. 1, pp. 1–19, 2004.
13. E. Rahtu, A multiscale framework for affine invariant pattern recognition and registration, Ph.D. thesis, University of Oulu, Oulu, Finland, 2007.
14. R. Veltkamp and M. Hagedoorn, “State-of-the-art in shape matching,” Tech. Rep. UU-CS-1999, 1999.
15. I. Weiss, “Geometric invariants and object recognition,” International Journal of Computer Vision, vol. 10, no. 3, pp. 207–231, 1993.
16. K. Arbter, W. E. Snyder, H. Burkhardt, and G. Hirzinger, “Application of affine-invariant Fourier descriptors to recognition of 3-D objects,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 640–647, 1990.
17. W. S. Lin and C. H. Fang, “Synthesized affine invariant function for 2D shape recognition,” Pattern Recognition, vol. 40, no. 7, pp. 1921–1928, 2007.
18. I. El Rube, M. Ahmed, and M. Kamel, “Wavelet approximation-based affine invariant shape representation functions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 2, pp. 323–327, 2006.
19. Q. M. Tieng and W. W. Boles, “Wavelet-based affine invariant representation: a tool for recognizing planar objects in 3D space,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 8, pp. 846–857, 1997.
20. G. Tzimiropoulos, N. Mitianoudis, and T. Stathaki, “Robust recognition of planar shapes under affine transforms using principal component analysis,” IEEE Signal Processing Letters, vol. 14, no. 10, pp. 723–726, 2007.
21. R. Alferez and Y. F. Wang, “Geometric and illumination invariants for object recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 6, pp. 505–536, 1999.
22. D. Cyganski and R. F. Vaz, “A linear signal decomposition approach to affine invariant contour identification,” SPIE: Intelligent Robots and Computer Vision X, vol. 1607, pp. 98–109, 1991.
23. M. Yang, K. Kpalma, and J. Ronsin, “Affine invariance contour descriptor based on the equal area normalization,” IAENG International Journal of Applied Mathematics, vol. 36, no. 2, 2007.
24. Y. W. Chen and C. L. Xu, “Rolling penetrate descriptor for shape-based image retrieval and object recognition,” Pattern Recognition Letters, vol. 30, no. 9, pp. 799–804, 2009.
25. M. Li, C. Cattani, and S. Y. Chen, “Viewing sea level by a one-dimensional random function with long memory,” Mathematical Problems in Engineering, vol. 2011, Article ID 654284, 13 pages, 2011.
26. M. Li and W. Zhao, “Visiting power laws in cyber-physical networking systems,” Mathematical Problems in Engineering, vol. 2012, Article ID 302786, 13 pages, 2012.
27. W. S. Chen, P. C. Yuen, and X. Xie, “Kernel machine-based rank-lifting regularized discriminant analysis method for face recognition,” Neurocomputing, vol. 74, no. 17, pp. 2953–2960, 2011.
28. Y. Y. Tang, Y. Tao, and E. C. M. Lam, “New method for feature extraction based on fractal behavior,” Pattern Recognition, vol. 35, no. 5, pp. 1071–1081, 2002.
29. J. Yang, Z. Chen, W. S. Chen, and Y. Chen, “Robust affine invariant descriptors,” Mathematical Problems in Engineering, vol. 2011, Article ID 185303, 15 pages, 2011.