Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 179489, 9 pages
http://dx.doi.org/10.1155/2013/179489
Research Article

Estimation of Large Scalings in Images Based on Multilayer Pseudopolar Fractional Fourier Transform

1College of Math & Physics, Nanjing University of Information Science & Technology, Nanjing 210044, China
2School of Information Science and Technology, East China Normal University, No. 500 Dong-Chuan Road, Shanghai 200241, China
3Department of Computer and Information Science, University of Macau, Avenue Padre Tomás Pereira, Taipa, Macau

Received 20 January 2013; Accepted 28 April 2013

Academic Editor: Hai-lin Liu

Copyright © 2013 Zhenhong Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Accurate computation of the Fourier transform in log-polar coordinates is a major challenge for phase-correlation-based motion estimation. To achieve better image registration accuracy, a method is proposed to estimate the log-polar Fourier coefficients using a multilayer pseudopolar fractional Fourier transform (MPFFT). The MPFFT approach combines the pseudopolar and multilayer techniques and provides a grid that is geometrically similar to the log-polar grid: the multilayer pseudopolar grid is dense at low frequencies and sparse at high frequencies. As a result, large scalings in images can be estimated, and better image registration accuracy can be achieved. Experimental results demonstrate the effectiveness of the presented method.

1. Introduction

Motion estimation is of great importance to many image processing and computer vision applications. It plays a vital role in image mosaicking, image compression, video enhancement, scene representation, and so forth [1–6]. Numerous image registration techniques have been developed over the years [1, 7–9]. These methods can be loosely divided into the following groups: algorithms that use image pixel values directly, for example, correlation methods [10]; algorithms that work in the frequency domain, for example, Fourier-based methods [11–14]; algorithms that use low-level features such as edges and corners, for example, feature-based methods [15]; and algorithms that use high-level features such as identified (parts of) objects or relations between features [16], for example, graph-theoretic methods [17]. Fourier-based methods, which take advantage of the fast Fourier transform (FFT), are among the most popular.

The theoretical background of Fourier-based image registration is described in [18, 19]. Based on the shift property of the Fourier transform, the translation can be robustly estimated by phase correlation. To account for rotations and scalings, the image is transformed into a uniform polar or log-polar Fourier representation. As a result, rotations and scalings are reduced to translations, which can again be estimated by phase correlation. It has been shown that Fourier-based methods are robust to noise and to time-varying illumination disturbances [20, 21].

In the Fourier-based methods, the greatest challenge is to evaluate the log-polar Fourier transform efficiently and accurately. The FFT is normally defined on a Cartesian grid. To obtain the log-polar coefficients, one might compute the FFT coefficients on the Cartesian grid and then interpolate them to the log-polar grid (see [19] for more details). Unfortunately, such an interpolation process is not accurate enough and results in significant errors. Based on the pseudopolar fractional Fourier transform (PPFFT) [11], a method was developed in [20] to estimate large translations, rotations, and scalings in images. It is demonstrated in [20] that the PPFFT yields more accurate registration parameters than the method given in [19]. However, the PPFFT method [20] can only recover scale factors up to 4. More recently, Pan et al. [21] presented a method called the multilayer fractional Fourier transform (MLFFT) for fast and accurate polar/log-polar Fourier transforms. The advantages of MLFFT over other fast log-polar Fourier transform methods have been demonstrated theoretically and experimentally in [21].

Although PPFFT and MLFFT bring substantial improvements to the fast polar/log-polar Fourier transform, they are still unsatisfactory for motion estimation because neither grid approximates the log-polar grid well. PPFFT concentrates on reducing the angular differences between the constructed grid and the uniform log-polar grid while ignoring the radial differences. Conversely, MLFFT concentrates on reducing the radial differences while ignoring the angular differences.

To achieve better image registration accuracy, a method is proposed to estimate the log-polar Fourier coefficients using a multilayer pseudopolar fractional Fourier transform (MPFFT). It combines PPFFT and MLFFT and provides a grid that is geometrically similar to the log-polar grid: the multilayer pseudopolar grid is dense at low frequencies and sparse at high frequencies. We compare the approximation accuracy of the log-polar Fourier transform under PPFFT, MLFFT, and MPFFT and show that the accumulated grid distances of MPFFT are lower than those of PPFFT and MLFFT. As a result, large scalings in images can be estimated, and better image registration accuracy can be achieved. Experimental results demonstrate the effectiveness of the presented method.

The rest of the paper is organized as follows. Prior techniques related to FFT-based image registration are given in Section 2. In Section 3, MPFFT is described, and the accuracy of MPFFT is also analyzed. Experimental results are discussed in Section 4, and final conclusions are given in Section 5.

2. Basic Theory in the Fourier Domain

2.1. Phase Correlation

Let $f_1(x, y)$ and $f_2(x, y)$ be two images that differ only by a displacement $(x_0, y_0)$; that is,

$$f_2(x, y) = f_1(x - x_0,\, y - y_0). \qquad (1)$$

Their corresponding Fourier transforms $F_1(u, v)$ and $F_2(u, v)$ will be related by

$$F_2(u, v) = e^{-j2\pi(ux_0 + vy_0)}\, F_1(u, v). \qquad (2)$$

The cross-power spectrum of two images $f_1$ and $f_2$ with Fourier transforms $F_1$ and $F_2$ is defined as follows:

$$\frac{F_1(u, v)\, F_2^{*}(u, v)}{\left| F_1(u, v)\, F_2^{*}(u, v) \right|} = e^{\,j2\pi(ux_0 + vy_0)}, \qquad (3)$$

where $F_2^{*}$ is the complex conjugate of $F_2$; the shift theorem guarantees that the phase of the cross-power spectrum is equivalent to the phase difference between the images. By taking the inverse Fourier transform of this frequency-domain representation, we obtain a function that is approximately an impulse: it is close to zero everywhere except at the displacement that is needed to optimally register the two images [19].
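The phase-correlation step above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation; note that the order of conjugation (here `conj(F1) * F2`) determines the sign of the recovered shift under NumPy's `ifft2` convention.

```python
import numpy as np

def phase_correlation(f1, f2):
    """Estimate the cyclic translation (dy, dx) such that f2 is f1
    shifted by (dy, dx), via the normalized cross-power spectrum."""
    F1 = np.fft.fft2(f1)
    F2 = np.fft.fft2(f2)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12        # keep only the phase
    corr = np.real(np.fft.ifft2(cross))   # ~ impulse at the displacement
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half of the index range to negative shifts
    if dy > f1.shape[0] // 2:
        dy -= f1.shape[0]
    if dx > f1.shape[1] // 2:
        dx -= f1.shape[1]
    return dy, dx
```

For an exactly circularly shifted image, the correlation surface is an exact impulse; for real image pairs the peak is merely dominant.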

2.2. Log-Polar Coordinates Translation

Let $f_2(x, y)$ be a translated, rotated, and scaled replica of $f_1(x, y)$:

$$f_2(x, y) = f_1\big(s(x\cos\theta_0 + y\sin\theta_0) - x_0,\; s(-x\sin\theta_0 + y\cos\theta_0) - y_0\big), \qquad (4)$$

where $\theta_0$, $s$, and $(x_0, y_0)$ are the rotation angle, scale factor, and translation parameters, respectively.

The Fourier transform of (4) in a polar coordinate system is

$$F_2(\rho, \theta) = \frac{e^{-j\phi(\rho,\theta)}}{s^2}\, F_1\!\left(\frac{\rho}{s},\; \theta - \theta_0\right), \qquad (5)$$

where $\rho$ is the radial frequency, $\theta$ is the angular frequency, and $\phi(\rho, \theta)$ is a phase term that depends only on the translation. Let $M_1$ and $M_2$ be the magnitudes of $F_1$ and $F_2$. Then, we have

$$M_2(\rho, \theta) = \frac{1}{s^2}\, M_1\!\left(\frac{\rho}{s},\; \theta - \theta_0\right). \qquad (6)$$

In log-polar coordinates, (6) becomes

$$M_2(\log\rho,\; \theta) = \frac{1}{s^2}\, M_1(\log\rho - \log s,\; \theta - \theta_0). \qquad (7)$$

Letting $\lambda = \log\rho$ and $d = \log s$, (7) can be written as

$$M_2(\lambda, \theta) = \frac{1}{s^2}\, M_1(\lambda - d,\; \theta - \theta_0). \qquad (8)$$

Because the Fourier spectrum of a real sequence is conjugate symmetric, only two quadrants of the Fourier spectra of the images need to be mapped to the log-polar plane for parameter estimation. The rotation and scale parameters can then be recovered by phase correlation using (8) [22].

Obviously, it is hard to calculate the discrete Fourier magnitudes of images directly in log-polar coordinates. Traditional algorithms use the 2D FFT to approximate the log-polar Fourier representation of images, which introduces large interpolation errors.
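The baseline approach criticized here, computing the Cartesian FFT and interpolating its magnitude onto a log-polar grid, can be sketched as follows. This is a minimal sketch using SciPy's bilinear interpolation; the grid sizes `n_rad` and `n_ang` are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def logpolar_magnitude(img, n_rad=64, n_ang=64):
    """Interpolate the centered FFT magnitude of `img` onto a
    log-polar grid (the baseline whose interpolation error
    MPFFT is designed to reduce)."""
    M = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = (np.asarray(M.shape) - 1) / 2.0
    r_max = min(cy, cx)
    # logarithmically spaced radii, uniform angles over [0, pi)
    r = np.exp(np.linspace(0.0, np.log(r_max), n_rad))
    theta = np.linspace(0.0, np.pi, n_ang, endpoint=False)
    ys = cy + np.outer(r, np.sin(theta))
    xs = cx + np.outer(r, np.cos(theta))
    return map_coordinates(M, [ys, xs], order=1)  # bilinear interpolation
```

Only half the angular range is sampled, exploiting the conjugate symmetry of real-image spectra mentioned above.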

3. Multilayer Pseudopolar Fractional Fourier Transform

3.1. Fractional Fourier Transform

The proposed algorithm is based on the fractional Fourier transform. As in [20], the centered fractional Fourier transform is employed in the proposed method. Given a vector $x = (x(0), x(1), \ldots, x(N-1))$, the fractional Fourier transform with parameter $\alpha$ is defined as follows:

$$(F^{\alpha} x)(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j2\pi\alpha (n - N/2)(k - N/2)/N}, \qquad k = 0, 1, \ldots, N-1. \qquad (9)$$

When $\alpha = 1$, the fractional Fourier transform is equal to the discrete Fourier transform; thus, we get the values of the frequencies that are distributed uniformly in $[-\pi, \pi)$ in the frequency domain.

When $0 < \alpha < 1$, the fractional Fourier transform returns the values of the frequencies that are scattered uniformly in $[-\pi\alpha, \pi\alpha)$ in the frequency domain.

In this paper, we only discuss the case of $\alpha = 1/2^{\,l-1}$, where $l$ is the layer index.
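A direct (O(N²)) evaluation of the centered fractional Fourier transform in (9) can be written as follows; for α = 1 it reproduces the centered DFT, which can be checked against NumPy's FFT. Practical implementations use a chirp-z/Bluestein factorization for O(N log N) cost; this sketch only illustrates the definition.

```python
import numpy as np

def frft(x, alpha):
    """Centered fractional Fourier transform of a length-N vector:
    (F^a x)(k) = sum_n x(n) * exp(-2j*pi*a*(n - N/2)*(k - N/2)/N).
    alpha = 1 gives the centered DFT; alpha < 1 samples the spectrum
    uniformly on the narrower band [-pi*alpha, pi*alpha)."""
    x = np.asarray(x)
    N = len(x)
    n = np.arange(N) - N / 2                       # centered indices
    E = np.exp(-2j * np.pi * alpha * np.outer(n, n) / N)
    return E @ x
```

The α = 1 check below uses the identity that the centered DFT equals `fftshift(fft(ifftshift(x)))`.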

3.2. Multilayer Pseudopolar Grid

In this paper, we estimate motions of images using MPFFT. MPFFT is defined on a multilayer pseudopolar grid. The set of multilayer pseudopolar grid points ($\mathrm{MP}_L$) is defined as follows:

$$\mathrm{MP}_L = \bigcup_{l=1}^{L} \frac{1}{2^{\,l-1}}\, \mathrm{PP},$$

where PP is the set of pseudopolar grid points and $L$ is the number of layers (e.g., see Figure 1(d)). If $L = 1$, this grid is the same as the pseudopolar grid. $\mathrm{MP}_L$ can be viewed as a composite of the pseudopolar grid and the multilayer grid, but it is denser than either of them.

Figure 1: (a) LP; (b) PP; (c) ML2; (d) MP2. The two-layer pseudopolar grid is much closer to the log-polar grid.

Let LP be the set of log-polar grid points, PP the set of pseudopolar grid points, and $\mathrm{ML}_L$ the set of $L$-layer Cartesian grid points:

$$\mathrm{ML}_L = \bigcup_{l=1}^{L} \frac{1}{2^{\,l-1}}\, \mathrm{C},$$

where C is the set of Cartesian grid points. Figure 1 shows examples of LP, PP, $\mathrm{ML}_2$, and $\mathrm{MP}_2$. We note that if $L = 1$, this grid is the same as the Cartesian grid.

From Figure 1, we observe that the multilayer pseudopolar grid is much closer to the log-polar grid than either the pseudopolar grid or the multilayer grid is.
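To make the grid construction concrete, here is one plausible parameterization of the pseudopolar grid on the square [-1, 1]² and of the multilayer pseudopolar grid as a union of half-scaled copies of it. The exact index ranges and normalization in the paper's (missing) equations may differ; treat this as an assumption-laden sketch.

```python
import numpy as np

def pseudopolar_grid(N):
    """Points of a pseudopolar grid on [-1, 1]^2: on each concentric
    square of 'radius' r = 2j/N, points are equally spaced in slope
    (not angle) along its sides."""
    r = 2.0 * np.arange(1, N // 2 + 1) / N         # square radii
    s = 2.0 * np.arange(-N // 2, N // 2 + 1) / N   # slopes in [-1, 1]
    R, S = np.meshgrid(r, s)
    vert = np.stack([R * S, R], axis=-1).reshape(-1, 2)  # basically-vertical rays
    horz = np.stack([R, R * S], axis=-1).reshape(-1, 2)  # basically-horizontal rays
    pts = np.concatenate([vert, horz])
    return np.concatenate([pts, -pts])  # reflect to cover all quadrants

def multilayer_pseudopolar_grid(N, layers=2):
    """MP_L as a union of copies of PP, each successive layer shrunk
    toward the origin by a factor of two."""
    pts = pseudopolar_grid(N)
    return np.concatenate([pts / 2 ** l for l in range(layers)])
```

The half-scaling of the second layer matches the later observation that magnifying it two times recovers PP.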

3.3. The Interpolation Error

In this section, the interpolation errors of PPFFT, MLFFT, and MPFFT will be discussed.

For a given image spectrum magnitude $M$, the interpolation error is bounded as follows [20]:

$$E \le \max\lVert \nabla M \rVert \cdot D, \qquad (12)$$

where $D$ is the distance between the actual point in the frequency plane and the closest point in the grid. Because $\max\lVert \nabla M \rVert$ is different for different signals, we substitute it with a constant $C$. It follows from (12) that

$$E \le C \cdot D. \qquad (13)$$

By evaluating (13) with such a constant $C$, we are able to compare the maximum interpolation errors of different transform methods without considering the signal itself [21].

In the rest of this paper, we only consider two-layer MP (MP2). Other MPs can be discussed similarly.

We denote by C the set of Cartesian grid points. Let $C_1$ be the set of first-layer Cartesian grid points in $\mathrm{ML}_2$; by construction, $C_1 = \mathrm{C}$. Let $C_2$ be the set of second-layer Cartesian grid points in $\mathrm{ML}_2$; in other words, $C_2$ is the complement of $C_1$ in $\mathrm{ML}_2$. We denote by $P_1$ the set of first-layer pseudopolar grid points in $\mathrm{MP}_2$. Furthermore, we denote by $P_2$ the set of second-layer pseudopolar grid points in $\mathrm{MP}_2$; in other words, $P_2$ is the complement of $P_1$ in $\mathrm{MP}_2$.

The distance between a point $p$ on LP and the closest point on PP can be expressed as follows [20]:

$$D_{\mathrm{PP}}(p) = \min_{q \in \mathrm{PP}} \lVert p - q \rVert,$$

where $q$ is the pseudopolar grid point closest to the log-polar grid point $p$. For the pseudopolar grid, it is derived in [20] that $D_{\mathrm{PP}}$ is determined by $\Delta\theta$, the angle between two neighboring polar radii, and $\Delta r$, the distance between two points on neighboring polar radii. Furthermore, it is shown in [20] that

$$D_{\mathrm{PP}} < D_{\mathrm{C}}, \qquad (19)$$

where $D_{\mathrm{PP}}$ is the distance between LP and PP and $D_{\mathrm{C}}$ is the distance between LP and C.

The distance between $\mathrm{ML}_2$ and LP can be expressed as

$$D_{\mathrm{ML2}}(p) = \min\{D_{C_1}(p),\, D_{C_2}(p)\}.$$

In $C_2$, we note that the grid spacing is half that of $C_1$. Therefore,

$$D_{C_2} = \tfrac{1}{2}\, D_{C_1}.$$

The points in $C_1$ coincide with the points in C. It follows that

$$D_{\mathrm{ML2}} \le D_{\mathrm{C}}. \qquad (22)$$

The distance between $\mathrm{MP}_2$ and LP can be expressed as

$$D_{\mathrm{MP2}}(p) = \min\{D_{P_1}(p),\, D_{P_2}(p)\}.$$

In $\mathrm{MP}_2$ (see Figure 2), we notice that

$$\Delta\theta' = \Delta\theta, \qquad \Delta r_2 = \tfrac{1}{2}\,\Delta r_1,$$

where $\Delta\theta'$ is the angle between two neighboring polar radii in $\mathrm{MP}_2$, $\Delta r_1$ is the distance between two points on one polar radius in $P_1$, and $\Delta r_2$ is the distance between two points on one polar radius in $P_2$. As a consequence,

$$D_{P_2} = \tfrac{1}{2}\, D_{P_1}. \qquad (25)$$

Figure 2: Geometrical properties of the MP2.

The points in $P_1$ coincide with the points in PP, and from (19) we reach the following conclusion: $P_1$ is denser than $C_1$. Moreover, if we magnify $C_2$ and $P_2$ two times, they become the same as C and PP, respectively. Then, we obtain

$$D_{P_1} \le D_{C_1}, \qquad (26)$$
$$D_{P_2} \le D_{C_2}. \qquad (27)$$

From (26) and (27), we get

$$D_{\mathrm{MP2}} \le D_{\mathrm{ML2}}. \qquad (28)$$

It follows from (19), (22), (25), and (28) that the distance between the log-polar grid points and the two-layer pseudopolar grid points is the shortest. We calculated the average distances between the log-polar grid points and the other grid points; the results, listed in Table 1, confirm the foregoing discussion.
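Average grid-to-grid distances of the kind reported in Table 1 can be measured numerically with a nearest-neighbour query. This sketch uses a k-d tree and an illustrative log-polar grid of our own choosing; the paper's grid sizes are not specified here.

```python
import numpy as np
from scipy.spatial import cKDTree

def logpolar_grid(n_rad=32, n_ang=32, r_min=0.05):
    """Log-polar grid points in the unit disc: logarithmically
    spaced radii, uniformly spaced angles."""
    r = np.exp(np.linspace(np.log(r_min), 0.0, n_rad))
    t = np.linspace(0.0, 2 * np.pi, n_ang, endpoint=False)
    R, T = np.meshgrid(r, t)
    return np.stack([R * np.cos(T), R * np.sin(T)], axis=-1).reshape(-1, 2)

def avg_nearest_distance(target, grid):
    """Mean distance from each target point to its nearest neighbour
    in `grid` -- the quantity compared across grids in Table 1."""
    d, _ = cKDTree(grid).query(target)
    return float(d.mean())
```

Feeding this the log-polar grid as the target and PP, ML2, or MP2 as the candidate grid reproduces the kind of comparison summarized in Table 1.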

Table 1: Average distances between log-polar grid and other grids.

4. Experiments and Results

The proposed registration algorithm was tested using a large set of test images which were rotated, scaled, and translated.

4.1. A Framework for Image Registration with MPFFT

Let $f_1$ and $f_2$ be the reference and target images. The algorithm operates as follows.
(i) Input images $f_1$ and $f_2$. Apply a Blackman window function to them if necessary, as shown in Figure 3.
(ii) Choose the proper number of layers $L$. In this paper, we use the two-layer pseudopolar grid.
(iii) Calculate the MPFFT magnitudes of $f_1$ and $f_2$ to obtain $M_1$ and $M_2$. Then apply a high-pass filter to them.
(iv) Calculate the log-polar Fourier transform magnitude spectra of $f_1$ and $f_2$ by bilinear interpolation on the multilayer pseudopolar grid.
(v) Calculate the scale factor and the rotation angle by the phase correlation technique on the log-polar Fourier transform spectra of both images.
(vi) Apply the scale factor and the rotation angle to the reference image and use phase correlation again to detect the translation.
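Steps (i) and (iii) of the framework, windowing the input and high-pass filtering the spectrum magnitude, can be sketched as follows. The filter shown is the spectral emphasis function $H = (1 - X)(2 - X)$ with $X = \cos(\pi\xi)\cos(\pi\eta)$ from [19]; whether the paper's experiments use exactly this filter (they cite [23] for robust phase correlation) is an assumption.

```python
import numpy as np

def highpass_filter(shape):
    """Spectral emphasis filter H(xi, eta) = (1 - X)(2 - X) with
    X = cos(pi*xi)*cos(pi*eta), xi, eta in [-0.5, 0.5).
    Zero at DC, about 2 at the corners of the spectrum."""
    xi = np.arange(shape[0]) / shape[0] - 0.5
    eta = np.arange(shape[1]) / shape[1] - 0.5
    X = np.outer(np.cos(np.pi * xi), np.cos(np.pi * eta))
    return (1.0 - X) * (2.0 - X)

def windowed_spectrum(img):
    """Blackman-window the image, take the centered FFT magnitude,
    and apply the high-pass emphasis filter."""
    w = np.outer(np.blackman(img.shape[0]), np.blackman(img.shape[1]))
    M = np.abs(np.fft.fftshift(np.fft.fft2(img * w)))
    return M * highpass_filter(img.shape)
```

The window suppresses the boundary discontinuities that cause rotation aliasing, and the filter attenuates the low frequencies where scaled spectra overlap most.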

Figure 3: Registration results of images. (a) Reference images. (b) Target images. (c) Registration results by PPFFT. (d) Registration results by MLFFT. (e) Registration results by MPFFT.
4.2. Registration for Arbitrary Rotation and Large Scaling Images

In this section, registration experiments compare the MPFFT algorithm with PPFFT and MLFFT. All the experiments were performed on images that were rotated and scaled as shown in Figure 3. All the test images were preprocessed by a high-pass filter to reduce the overlapping of the scaled spectra [23]. Some of the test images were preprocessed by a Blackman window to decrease the rotation aliasing [24]. The numeric registration results for these images are presented in Table 2. The two-layer pseudopolar fractional Fourier transform method was shown to be more accurate than the pseudopolar fractional Fourier transform method and the two-layer fractional Fourier transform method by testing the Cameraman image with the Blackman window, and the Lena image and the Westconcordortho image without the Blackman window.

Table 2: Registration results of Figure 3.
4.3. Registration for Noisy Images

All the experiments in this section were performed on images that were rotated and scaled as shown in Figure 4. It is well known that Fourier-based methods are robust to noise, and our experiments verified that the MPFFT algorithm is robust to noise as well. The images of Figure 4(b) are target images with noise. The registration results of the noisy images by PPFFT, MLFFT, and MPFFT are shown in Figures 4(c), 4(d), and 4(e). The numeric registration results for these images are presented in Table 3.

Table 3: Registration results of Figure 4.
Figure 4: Registration results of noisy images. (a) Reference images. (b) Target images with noise. (c) Registration results by PPFFT. (d) Registration results by MLFFT. (e) Registration results by MPFFT.
4.4. Registration for Real Images

The images of Figures 5(a) and 5(b) were used to test the estimation on real images. In this section, MPFFT was tested on real images that were rotated and scaled, as shown in Figure 5, similarly to the previous experiments. The images of Figure 5(a) are reference images, and the images of Figure 5(b) are target images. The registration results of the campus images by PPFFT, MLFFT, and MPFFT are shown in Figures 5(c), 5(d), and 5(e). The experiments show that the PPFFT method could find neither the rotation factor nor the scale factor, and the MLFFT method could find the scale factor but not the rotation factor, while all the correct factors could be found by MPFFT.

Figure 5: Registration results of campus images. (a) Reference images. (b) Target images. (c) Registration results by PPFFT. (d) Registration results by MLFFT. (e) Registration results by MPFFT.

5. Conclusions

Based on PPFFT and MLFFT, a new image registration algorithm called MPFFT was presented for recovering large translations, rotations, and scalings between images. The proposed algorithm uses the multilayer pseudopolar grid to approximate the log-polar grid, thereby improving on the accuracy of PPFFT and MLFFT. Some problems deserve further study. For example, although our algorithm is more accurate than PPFFT and MLFFT, its computing cost is about twice (the layer number) that of the PPFFT algorithm and twice that of the MLFFT algorithm. Further work will focus on these problems.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 60973157. Ming Li acknowledges support in part from the 973 plan under Project Grant no. 2011CB302800 and from the National Natural Science Foundation of China under Project Grants nos. 61272402, 61070214, and 60873264.

References

  1. B. Zitová and J. Flusser, “Image registration methods: a survey,” Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.
  2. H. Foroosh, J. B. Zerubia, and M. Berthod, “Extension of phase correlation to subpixel registration,” IEEE Transactions on Image Processing, vol. 11, no. 3, pp. 188–200, 2002.
  3. S. H. Chang, F. H. Cheng, W. H. Hsu, and G. Z. Wu, “Fast algorithm for point pattern matching: invariant to translations, rotations and scale changes,” Pattern Recognition, vol. 30, no. 2, pp. 311–320, 1997.
  4. G. K. Matsopoulos, S. Marshall, and J. N. H. Brunt, “Multiresolution morphological fusion of MR and CT images of the human brain,” IEE Proceedings: Vision, Image and Signal Processing, vol. 141, no. 3, pp. 137–142, 1994.
  5. Q. Wang, C. Zou, Y. Yuan, H. Lu, and P. Yan, “Image registration by normalized mapping,” Neurocomputing, vol. 101, pp. 181–189, 2013.
  6. J. Han, E. J. Pauwels, and P. de Zeeuw, “Visible and infrared image registration in man-made environments employing hybrid visual features,” Pattern Recognition Letters, vol. 34, no. 1, pp. 42–51, 2013.
  7. Y. Wang, J. Huang, J. Liu, and X. Tang, “An efficient image-registration method based on probability density and global parallax,” AEU—International Journal of Electronics and Communications, vol. 64, no. 12, pp. 1148–1156, 2010.
  8. T. J. Winkstern and N. D. Cahill, “Rapid DFT-based variational image registration with sliding boundary conditions,” in Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 429–432, 2011.
  9. M. Marinelli, V. Positano, F. Tucci, D. Neglia, and L. Landini, “Automatic PET-CT image registration method based on mutual information and genetic algorithms,” Scientific World Journal, vol. 2012, Article ID 567067, 12 pages, 2012.
  10. W. K. Pratt, “Correlation techniques of image registration,” IEEE Transactions on Aerospace and Electronic Systems, vol. 10, no. 3, pp. 353–358, 1974.
  11. A. Averbuch, R. R. Coifman, D. L. Donoho, M. Elad, and M. Israeli, “Fast and accurate polar Fourier transform,” Applied and Computational Harmonic Analysis, vol. 21, no. 2, pp. 145–167, 2006.
  12. H. Su and S. Lai, “CT-MR image registration in 3D K-space based on Fourier moment matching,” in Advances in Image and Video Technology, vol. 7088 of Lecture Notes in Computer Science, pp. 299–310, 2012.
  13. N. D. Cahill, J. A. Noble, and D. J. Hawkes, “Fourier methods for nonparametric image registration,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–8, June 2007.
  14. G. Tzimiropoulos, V. Argyriou, S. Zafeiriou, and T. Stathaki, “Robust FFT-based scale-invariant image registration with image gradients,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1899–1906, 2010.
  15. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
  16. K. Krish, S. Heinrich, W. E. Snyder, H. Cakir, and S. Khorram, “Global registration of overlapping images using accumulative image features,” Pattern Recognition Letters, vol. 31, no. 2, pp. 112–118, 2010.
  17. L. G. Brown, “Survey of image registration techniques,” ACM Computing Surveys, vol. 24, no. 4, pp. 325–376, 1992.
  18. C. D. Kuglin and D. C. Hines, “The phase correlation image alignment method,” in Proceedings of the IEEE Conference on Cybernetics and Society, vol. 9, pp. 163–165, 1975.
  19. B. S. Reddy and B. N. Chatterji, “An FFT-based technique for translation, rotation, and scale-invariant image registration,” IEEE Transactions on Image Processing, vol. 5, no. 8, pp. 1266–1271, 1996.
  20. Y. Keller, A. Averbuch, and M. Israeli, “Pseudopolar-based estimation of large translations, rotations, and scalings in images,” IEEE Transactions on Image Processing, vol. 14, no. 1, pp. 12–22, 2005.
  21. W. Pan, K. Qin, and Y. Chen, “An adaptable-multilayer fractional Fourier transform approach for image registration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 3, pp. 400–413, 2009.
  22. H. Liu, B. Guo, and Z. Feng, “Pseudo-log-polar Fourier transform for image registration,” IEEE Signal Processing Letters, vol. 13, no. 1, pp. 17–20, 2006.
  23. Y. Keller, A. Averbuch, and O. Miller, “Robust phase correlation,” in Proceedings of the 17th International Conference on Pattern Recognition, vol. 2, pp. 740–743, 2004.
  24. H. S. Stone, B. Tao, and M. McGuire, “Analysis of image registration noise due to rotationally dependent aliasing,” Journal of Visual Communication and Image Representation, vol. 14, no. 2, pp. 114–135, 2003.