Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 408232, 9 pages
http://dx.doi.org/10.1155/2013/408232
Research Article

A Novel Image Fusion Method Based on FRFT-NSCT

College of Electronic and Information Engineering, Hebei University, Baoding 071002, China

Received 9 November 2012; Accepted 12 January 2013

Academic Editor: József Kázmér Tar

Copyright © 2013 Peiguang Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The nonsubsampled Contourlet transform (NSCT) has multiscale, localization, multidirection, and shift-invariance properties, but it confines signal analysis to the time-frequency domain. The fractional Fourier transform (FRFT) extends signal analysis to the fractional domain and has many attractive properties, but it cannot capture local signal characteristics. A novel image fusion algorithm based on FRFT and NSCT is proposed and demonstrated in this paper. First, the FRFT is applied to the two source images to obtain fractional-domain matrices. Second, the NSCT is performed on these matrices to acquire multiscale and multidirection representations. Third, fusion rules are applied to the low-frequency subband coefficients and the directional bandpass subband coefficients to obtain the fused coefficients. Finally, the fused image is obtained by performing the inverse NSCT and inverse FRFT on the combined coefficients. Three types of image pairs and three fusion rules are used to test the proposed algorithm. The simulation results show that the proposed fusion approach outperforms methods based on the NSCT alone under the same parameters.

1. Introduction

Image fusion synthesizes two or more images of the same object, acquired by different sensors, into a new image that describes the object more accurately and more comprehensively. Image fusion has been widely used in military applications, remote sensing, robot vision, medical image processing, and other areas. Along with the development of mathematical tools and fusion rules, image fusion methods are continually evolving. Recently, various fusion methods based on multiscale transforms (MSTs), such as the pyramid and wavelet transforms, have been proposed, with some satisfactory results. These multiscale methods decompose the image into low-frequency and high-frequency subbands, which retain the coarse and detailed features, respectively; the subbands can then be processed separately according to different demands [1, 2]. MST-based image fusion methods perform much better than earlier simple methods. At present the discrete wavelet transform is the most popular multiscale method in image fusion because of its good localization in both the spatial and frequency domains [3, 4]. However, the wavelet transform has limitations, such as a limited number of directions (only horizontal, vertical, and diagonal) and a nonoptimally sparse representation of images. To overcome these limitations, newer multiscale transforms (e.g., the Curvelet and Contourlet transforms) have been introduced into image fusion [5, 6].

Do and Vetterli proposed a multidirection and multiresolution image representation, the Contourlet transform, in 2002 [7]. This transform has good directional sensitivity and anisotropy and can accurately capture image edge information. Compared with the wavelet transform, the Contourlet transform can better represent the geometric features of an image, so it is well suited to two-dimensional image processing tasks such as image enhancement and image denoising, where it can achieve better results than the wavelet transform. Miao and Wang applied the Contourlet transform to image fusion [8]. However, because it requires upsamplers and downsamplers, the Contourlet transform lacks shift invariance, which often causes Gibbs artifacts. da Cunha et al. proposed a shift-invariant version, the nonsubsampled Contourlet transform (NSCT), in 2006 [9]. In the NSCT, nonsubsampled filter banks replace the filter banks with upsamplers and downsamplers, yielding shift invariance. Because of its advantages of multiscale and multidirection analysis, good spatial and frequency localization, and shift invariance, many high-performance image fusion methods based on the NSCT have been proposed [10, 11].

The fractional Fourier transform (FRFT) is a transformation that extends signal analysis into the fractional time-frequency domain. It corresponds to a rotation of the signal's Wigner distribution and generalizes the Fourier transform to an arbitrary rotation angle. The introduction of the parameter p, the order of the FRFT, strengthens the transform's flexibility: as p varies from 0 to 1, the signal representation moves continuously from the time domain to the frequency domain. The FRFT can reflect signal information in the time domain and the frequency domain simultaneously, so essentially it is a unified time-frequency transformation [12–15]. The FRFT retains all the characteristics of the Fourier transform and also has some important properties that the traditional Fourier transform does not. It has been used in communications, SAR data processing, and image processing; furthermore, new algorithms based on the FRFT, such as the fractional wavelet transform (FRWT), have been employed in signal processing [16–18].

The NSCT has multiscale, localization, multidirection, and shift-invariance properties, but it confines signal analysis to the time-frequency domain. The FRFT extends signal analysis to the fractional domain and has many attractive properties, but it cannot capture local signal characteristics. To combine the merits of the NSCT and the FRFT, a novel image fusion algorithm based on FRFT-NSCT is proposed in this paper. The related theory and the flowchart of the fusion algorithm are introduced in Section 2, the experimental results and analyses for three types of image pairs and three fusion rules are presented in Section 3, and the conclusions are given in Section 4.

2. Image Fusion Based on FRFT-NSCT

2.1. Nonsubsampled Contourlet Transform (NSCT)

The NSCT is a shift-invariant version of the Contourlet transform. The Contourlet transform employs Laplacian pyramids (LPs) for multiscale decomposition and directional filter banks (DFBs) for directional decomposition. To achieve shift invariance, the NSCT is instead built upon nonsubsampled pyramids (NSPs) and nonsubsampled directional filter banks (NSDFBs). The NSP is a two-channel nonsubsampled filter bank with no downsampling or upsampling, and hence it is shift invariant. Multiscale decomposition is achieved by iterating these nonsubsampled filter banks; a J-level expansion has redundancy J + 1, where J denotes the number of decomposition levels. At each subsequent level, all filters are upsampled by 2 in both dimensions. The cascading of the analysis part is shown in Figure 1 [19, 20].
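The shift invariance of the NSP can be sketched with an à-trous difference pyramid: instead of downsampling the image, the lowpass kernel itself is upsampled (zeros inserted between its taps) at each level, so every subband keeps the full image size. The sketch below is illustrative only; it uses a simple separable [1, 2, 1]/4 kernel and circular convolution rather than the "maxflat" filters used in the paper.

```python
import numpy as np

def atrous_kernel(level):
    # Separable lowpass kernel with 2**level - 1 zeros inserted between
    # taps ("a trous"), so the image itself is never down/upsampled.
    base = np.array([1.0, 2.0, 1.0]) / 4.0
    k1d = np.zeros(2 * 2 ** level + 1)
    k1d[:: 2 ** level] = base
    return np.outer(k1d, k1d)

def conv2_circ(img, ker):
    # Circular 2D convolution via FFT, with the kernel centred on the origin.
    kh, kw = ker.shape
    pad = np.zeros(img.shape)
    pad[:kh, :kw] = ker
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def nsp_decompose(img, levels):
    """Nonsubsampled (difference) pyramid: J bandpass subbands plus one
    lowpass, all at full image size -- the J + 1 redundancy noted above."""
    bands, low = [], np.asarray(img, dtype=float)
    for j in range(levels):
        smooth = conv2_circ(low, atrous_kernel(j))
        bands.append(low - smooth)   # bandpass detail at scale j
        low = smooth
    bands.append(low)                # final lowpass approximation
    return bands

def nsp_reconstruct(bands):
    # The difference pyramid telescopes, so reconstruction is a plain sum.
    return np.sum(bands, axis=0)
```

Because each level stores the difference between successive lowpass images, the sum of all subbands reconstructs the input exactly, mirroring the invertibility required of the NSP building block.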

Figure 1: NSP framework.

The equivalent filters of a J-level cascaded NSP are given by

H_n^{eq}(z) = H_1\big(z^{2^{n-1}I}\big) \prod_{j=0}^{n-2} H_0\big(z^{2^{j}I}\big), \quad 1 \le n \le J, \qquad H_{J+1}^{eq}(z) = \prod_{j=0}^{J-1} H_0\big(z^{2^{j}I}\big),

where H_0(z) and H_1(z) are the lowpass and highpass filters of the two-channel nonsubsampled filter bank and I is the 2 × 2 identity matrix.

The NSDFB is a shift-invariant version of the critically sampled DFB in the Contourlet transform. The building block of an NSDFB is also a two-channel nonsubsampled filter bank. To obtain finer directional decomposition, the NSDFBs are iterated. For the next level, all filters are upsampled by the quincunx matrix Q = [1 1; 1 −1]. Figure 2 illustrates a four-channel decomposition built from two-channel fan filter banks.
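The quincunx upsampling step can be made concrete: each filter coefficient h[n] is moved to the location Q·n, which spreads the taps onto a checkerboard grid and rotates the filter's passband. The snippet below is a generic illustration of filter upsampling by an integer matrix, not the paper's actual fan filters.

```python
import numpy as np

# Quincunx sampling matrix used to upsample the second-level fan filters
Q = np.array([[1, 1], [1, -1]])

def upsample_filter(h, M):
    """Filter upsampling H(z^M): coefficient h[n] moves to location M @ n,
    shifted so that all output indices are non-negative."""
    rows, cols = h.shape
    coords = np.array([(i, j) for i in range(rows) for j in range(cols)])
    mapped = coords @ M.T
    mapped = mapped - mapped.min(axis=0)       # shift to non-negative indices
    out = np.zeros(tuple(mapped.max(axis=0) + 1))
    for (i, j), (p, q) in zip(coords, mapped):
        out[p, q] = h[i, j]
    return out
```

Applied to a 2 × 2 filter, the four taps land on a 3 × 3 quincunx (checkerboard) pattern; since no samples of the image are discarded, the directional decomposition remains shift invariant.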

Figure 2: NSDFB framework.

The equivalent filter in each channel of the four-channel decomposition is given by

U_k^{eq}(z) = U_i(z)\, U_j\big(z^{Q}\big),

where U_i(z) is a fan filter of the first level and U_j(z^{Q}) is a quincunx-upsampled fan filter of the second level.

The NSCT is obtained by combining the 2D NSP and the NSDFB, as shown in Figure 3. If the building blocks of the NSP and the NSDFB are invertible, then the NSCT is invertible. The NSCT allows any number of directions at each scale and has redundancy 1 + \sum_{j=1}^{J} 2^{l_j}, where l_j denotes the number of levels in the NSDFB at the jth scale [20].

Figure 3: Decomposition framework of the NSCT.

The NSCT is well suited to image fusion because it has such important properties as multiresolution, localization, shift invariance, and multidirection analysis. The process of image fusion based on the NSCT usually proceeds as follows: first, obtain the low-frequency and high-frequency components at all scales by applying the multiscale, multidirection NSCT decomposition to image A and image B separately; then fuse them under chosen fusion rules to obtain the combined coefficients of the fused image; finally, obtain the fused image by applying the inverse NSCT.

2.2. Fractional Fourier Transform (FRFT)

If f(t) ∈ L²(ℝ), its pth-order FRFT is defined as

F_p(u) = \int_{-\infty}^{\infty} f(t)\, K_p(t, u)\, dt,

where K_p(t, u) is the FRFT kernel:

K_p(t, u) = \sqrt{\frac{1 - j\cot\alpha}{2\pi}} \exp\!\left(j\,\frac{t^2 + u^2}{2}\cot\alpha - j\,t u \csc\alpha\right), \quad \alpha \ne n\pi; \qquad K_p(t, u) = \delta(t - u), \quad \alpha = 2n\pi; \qquad K_p(t, u) = \delta(t + u), \quad \alpha = (2n \pm 1)\pi,

where α = pπ/2 is the rotation angle and p is the order of the fractional Fourier transform. When p = 0, the FRFT is the identity; when p = 1, the FRFT is the conventional Fourier transform. The pth-order inverse fractional Fourier transform (IFRFT) is the FRFT of order −p [21–24].

The data obtained by performing the FRFT contain both time-domain and frequency-domain information. As p varies from 0 to 1, the FRFT of a signal varies continuously from the input function itself to its Fourier transform, reflecting the transition of the signal from the time domain to the frequency domain. By introducing the fractional order p into the analysis, the FRFT gains characteristics that the traditional Fourier transform does not have and broadens the scope of signal analysis. From the above definition, however, the FRFT is a global transformation, so it cannot provide the local signal characteristics that are very important in nonstationary signal processing.
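The degenerate integer orders of the FRFT can be checked numerically with the unitary DFT: order 0 is the identity, order 1 is the ordinary Fourier transform, order 2 is the parity operator f(t) → f(−t), and order 4 returns the input. The sketch below handles integer orders only; a genuinely fractional order p requires a discrete FRFT construction such as the eigendecomposition method of Candan et al. [21], which interpolates between these cases.

```python
import numpy as np

def dfrft_int(x, p):
    """FRFT for integer orders p, realized by repeating the unitary DFT.
    Order 1 is the DFT, order 2 is parity (index reversal modulo N),
    and order 4 is the identity, matching the order-additivity of the FRFT."""
    y = np.asarray(x, dtype=complex)
    for _ in range(p % 4):
        y = np.fft.fft(y, norm="ortho")  # "ortho" keeps each step unitary
    return y
```

The parity check below relies on the DFT identity (F²x)[k] = x[(−k) mod N], which for a real sequence is the reversed sequence rolled by one sample.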

2.3. FRFT-NSCT Fusion Method

Mendlovic et al. define the fractional wavelet transform (FRWT) as follows: perform a FRFT of fractional order p on the entire input signal and then perform a conventional wavelet decomposition. For reconstruction, one applies the conventional inverse wavelet transform and then a FRFT of fractional order −p to return to the plane of the input function [25].

Following this idea, a new image fusion method based on FRFT-NSCT is proposed. First, perform the fractional Fourier transform on the two source images to obtain their fractional-domain representations; then apply the nonsubsampled Contourlet transform (NSCT) to decompose them into different frequency bands and directions; next, obtain the fused coefficients according to the chosen fusion rules; finally, obtain the fused image through the inverse nonsubsampled Contourlet transform (INSCT) and the inverse fractional Fourier transform (IFRFT). The flowchart of the image fusion method based on FRFT-NSCT is illustrated in Figure 4. The quality of the final fused image varies with the choice of the FRFT order, the number of NSCT decomposition levels, the pyramidal filter, the directional filter, and the fusion rules.
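The steps above can be sketched as a generic pipeline in which the transforms and fusion rules are supplied as callables. This is a structural sketch, not the paper's implementation: `frft`, `mst` (a multiscale transform standing in for the NSCT), and the fusion-rule functions are placeholders to be replaced by real implementations.

```python
import numpy as np

def frft_nsct_fuse(img_a, img_b, frft, ifrft, mst, imst, fuse_low, fuse_high):
    """FRFT-NSCT fusion skeleton. frft/ifrft and mst/imst (multiscale
    transform and its inverse) are callables; mst must return a pair
    (lowpass, [bandpass subbands])."""
    low_a, high_a = mst(frft(img_a))
    low_b, high_b = mst(frft(img_b))
    low = fuse_low(low_a, low_b)                              # low-frequency rule
    high = [fuse_high(a, b) for a, b in zip(high_a, high_b)]  # bandpass rule
    return np.real(ifrft(imst((low, high))))
```

As a sanity check, with the identity as the "FRFT" (order p = 0), a trivial one-level mean/detail split as the multiscale transform, and averaging plus maximum-absolute-value rules, fusing an image with itself must return the image unchanged.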

Figure 4: Flowchart of the FRFT-NSCT image fusion.
2.4. Objective Performance Evaluation

Human visual perception can help judge fusion results, but it is easily influenced by psychological factors. The evaluation of image fusion should therefore combine subjective visual inspection with objective quantitative criteria. Objective evaluation metrics such as entropy, average gradient, and standard deviation are employed to describe the information contained in the fused images [26, 27].

(1) Information entropy (IE): the IE of an image is an important index of the richness of the image information. Based on Shannon information theory, the IE of an image with L gray levels is defined as

IE = -\sum_{i=0}^{L-1} p_i \log_2 p_i,

where p_i is the ratio of the number of pixels with gray value i to the total number of pixels. IE reflects the amount of information carried by an image: the larger the IE, the more information the image carries.
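The IE definition above translates directly into code for an 8-bit grayscale image (L = 256); the only subtlety is that empty histogram bins are dropped, since 0 · log 0 is taken as 0.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy IE = -sum_i p_i log2 p_i of an 8-bit grey image,
    where p_i is the fraction of pixels with gray value i."""
    hist = np.bincount(np.asarray(img).ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                     # skip empty bins: 0 * log 0 := 0
    return float(-np.sum(p * np.log2(p)))
```

A constant image carries no information (IE = 0), while an image split equally between two gray values has IE = 1 bit, matching the formula.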

(2) Average gradient (AG): the AG reflects the ability to express small detail contrast and texture variation, as well as the sharpness of the image. For an M × N image F it can be expressed as

AG = \frac{1}{(M-1)(N-1)} \sum_{i=1}^{M-1} \sum_{j=1}^{N-1} \sqrt{\frac{(\Delta_x F(i,j))^2 + (\Delta_y F(i,j))^2}{2}},

where F(i, j) is the gray value of pixel (i, j) and \Delta_x F, \Delta_y F are the gray-level differences along the row and column directions. Generally, the larger the average gradient, the clearer the fused image.
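The AG formula above can be computed with forward differences; this vectorized sketch follows the (M−1)(N−1) normalization given above (conventions differ slightly across papers).

```python
import numpy as np

def average_gradient(img):
    """AG: mean over interior pixels of sqrt((dFx^2 + dFy^2) / 2),
    with forward differences along rows (dx) and columns (dy)."""
    f = np.asarray(img, dtype=float)
    dx = f[:-1, 1:] - f[:-1, :-1]    # horizontal difference
    dy = f[1:, :-1] - f[:-1, :-1]    # vertical difference
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```

A constant image has AG = 0, and a unit horizontal ramp gives AG = 1/√2, which makes the normalization easy to verify.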

3. Experiments and Results

3.1. Experimental Source Images

Without loss of generality, three groups of images of different types, all of size 512 × 512, are employed in the following experiments.

(1) Multifocus images. Figures 5(a) and 5(b) illustrate a pair of test images of two clocks with different focus: one focused on the right and the other on the left.

Figure 5: Fusion results of the multifocus images.

(2) Multispectrum images. Figures 6(a) and 6(b) are a visible image and an infrared image of the same scene. In the visible image, a person is very difficult to recognize, but the path, bush, square table, and stockade can be distinguished. In the infrared image, the person can be seen clearly, but the other scenery is quite hazy.

Figure 6: Fusion results of multispectrum images.

(3) Multimode medical images. Figures 7(a) and 7(b) show two medical images of different modes, a CT image and a SPECT image of the thyroid gland. The CT image has high resolution and images the bone very clearly, but its imaging of soft tissue lesions is poor. The SPECT image helps identify the extent of focal lesions because its imaging of soft tissue is clear; however, it lacks the rigid bone tissue as a positioning reference.

Figure 7: Fusion results of multimodality medical images.

Complementary information exists separately in the two images of every group; therefore these images are well suited to image fusion.

3.2. Fusion Rules

In the process of image fusion, the choice of fusion rules is very important because it influences the fusion results. Common fusion rules include weighted average, maximum selection, gradient, regional energy, and regional variance. In this paper our purpose is to compare the FRFT-NSCT fusion method with the NSCT fusion method under the same parameters, so three simple fusion rules are chosen for the experiments.

(1) Fusion rule 1#: both the low-frequency and the high-frequency coefficients follow the average value rule.

(2) Fusion rule 2#: the low-frequency coefficients follow the average value rule and the high-frequency coefficients follow the largest absolute value rule.

(3) Fusion rule 3#: both the low-frequency and the high-frequency coefficients follow the largest absolute value rule.
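The three rules above reduce to two coefficient combiners, an average and a maximum-absolute-value selection, paired differently for the low-frequency and high-frequency subbands. A minimal sketch:

```python
import numpy as np

def avg(a, b):
    # Average value rule: pointwise mean of the two coefficient arrays
    return (a + b) / 2.0

def max_abs(a, b):
    # Largest-absolute-value rule: keep the coefficient with bigger magnitude
    return np.where(np.abs(a) >= np.abs(b), a, b)

# rule number -> (low-frequency combiner, high-frequency combiner)
RULES = {1: (avg, avg), 2: (avg, max_abs), 3: (max_abs, max_abs)}

def apply_rule(rule, low_a, low_b, high_a, high_b):
    f_low, f_high = RULES[rule]
    return f_low(low_a, low_b), [f_high(a, b) for a, b in zip(high_a, high_b)]
```

For example, rule 2# averages the low-frequency coefficients while selecting, coefficient by coefficient, whichever high-frequency value has the larger magnitude.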

3.3. Experimental Results

In this section the proposed FRFT-NSCT fusion algorithm is compared with the NSCT fusion algorithm, with the parameters set identically in both cases. Following Section 2.1, the number of NSCT decomposition levels is 3, the nonsubsampled pyramid (NSP) uses the pyramidal filter "maxflat," and the nonsubsampled directional filter bank (NSDFB) uses "dmaxflat7." The order p of the FRFT is 0.9. The simulation software is MATLAB V7.1, running on an Intel Core i3-2100 CPU at 3.10 GHz with 2.99 GB of memory.

Figures 5(c)–5(h) illustrate the multifocus image fusion results for the two fusion methods and three fusion rules. Compared with the source images, all of the fused images eliminate the effects of the camera's differing focus and render all objects clearly. Among the three fusion rules, rules 2# and 3# are better than rule 1#; in particular, rule 2# preserves more of the outline and detail information of the source images. Observed subjectively, the differences between the fused images based on NSCT and on FRFT-NSCT are not very obvious under the same fusion rules.

Figures 6(c)–6(h) show the infrared and visible image fusion results. Compared with the source images, all of the fused images contain the visible and infrared information simultaneously. FRFT-NSCT outperforms NSCT under rules 1# and 2#; under rule 2# in particular, FRFT-NSCT retains more of the outline and detail information of the source images. Comparing Figure 6(g) with Figure 6(h), we can see that under rule 3# the NSCT result is brighter, but the FRFT-NSCT result is sharper.

Figures 7(c)–7(h) illustrate the multimodality medical image fusion results. Compared with the source images, all of the fused images synthesize the bone information and the soft-tissue lesion information. FRFT-NSCT retains more of the outline and detail information of the source images than NSCT, especially under rule 3#. Because the backgrounds of the source images differ greatly, the background of the fused image varies with the fusion rule.

The above observations rely on human visual perception, which is easily influenced by psychological factors. Objective evaluation criteria such as entropy, mean value, and average gradient can also judge the fusion results. Table 1 gives the quantitative fusion results for the three types of images under the two methods and the three fusion rules.

Table 1: Evaluation criteria of NSCT-based fusion method and FRFT-NSCT-based fusion method.

Entropy weighs the abundance of image information: the larger the entropy, the more information the fused image contains. The average gradient indicates the sharpness of an image: the larger the average gradient, the clearer the fused image. The fusion time measures the complexity of the algorithm. From Table 1, we can see that the quantitative evaluation criteria broadly agree with the visual assessment. Table 1 shows that the entropy and average gradient of FRFT-NSCT are larger than those of NSCT, except for the underlined values. This means that the fused image based on FRFT-NSCT usually has more detail information and higher spatial resolution than that based on NSCT, because FRFT-NSCT combines the multiscale, multidirection, and shift-invariance properties of the NSCT with the fractional-domain analysis of the FRFT. However, the fusion time of FRFT-NSCT is almost twice that of NSCT, indicating higher computational complexity, so the FRFT-NSCT method is better suited to situations that demand high precision rather than speed.

4. Conclusions

In image fusion research, multiscale algorithms have been the main trend. In this paper, a novel fusion method based on FRFT-NSCT is proposed. The nonsubsampled Contourlet transform (NSCT) has multiscale, multidirection, and shift-invariance properties. The fractional Fourier transform (FRFT) extends signal analysis into the fractional domain and can reflect signal information in the time domain and the frequency domain simultaneously. FRFT-NSCT combines the merits of both. To test the proposed method, three groups of images of different types were used as source images and three fusion rules were chosen. Fused images based on FRFT-NSCT retain more of the outline and detail information of the source images than those based on NSCT. The fused results demonstrate the validity and feasibility of the proposed algorithm. Further problems, such as parameter optimization, improved fusion rules, color image processing, and fast algorithms, will be discussed in follow-up research.

Acknowledgments

This work is supported by National Natural Science Foundation of China (no. 11271106), the Plan of Scientific Research of Hebei Education Department (no. 2010218), and Open Foundation of Biomedical Multidisciplinary Research Center of Hebei University (no. BM201103).

References

  1. K. Song and J. W. Ji, "An image fusion algorithm based on the wavelet transform," Journal of ShenYang Agricultural University, vol. 38, pp. 845–848, 2007.
  2. C. Gargour, M. Gabrea, V. Ramachandran, and J. M. Lina, "A short introduction to wavelets and their applications," IEEE Circuits and Systems Magazine, vol. 9, no. 2, pp. 57–68, 2009.
  3. G. Pajares and J. M. de la Cruz, "A wavelet-based image fusion tutorial," Pattern Recognition, vol. 37, no. 9, pp. 1855–1872, 2004.
  4. D. Y. Qin, J. D. Wang, and P. Li, "Wavelet base selection and evaluation for image fusion," Optoelectronic Technology, vol. 26, pp. 203–207, 2006.
  5. F. Nencini, A. Garzelli, S. Baronti, and L. Alparone, "Remote sensing image fusion using the curvelet transform," Information Fusion, vol. 8, no. 2, pp. 143–156, 2007.
  6. Q. Xiao-Bo, Y. Jing-Wen, X. Hong-Zhi, and Z. Zi-Qian, "Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain," Acta Automatica Sinica, vol. 34, no. 12, pp. 1508–1514, 2008.
  7. M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.
  8. Q. Miao and B. Wang, "A novel image fusion method using contourlet transform," IEEE Transactions on Image Processing, vol. 6, pp. 548–552, 2006.
  9. A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.
  10. W. Kong, Y. Lei, and X. Ni, "Fusion technique for grey-scale visible light and infrared images based on non-subsampled contourlet transform and intensity-hue-saturation transform," IET Signal Processing, vol. 5, no. 1, pp. 75–80, 2011.
  11. Y. Xiao-Hui and J. Li-Cheng, "Fusion algorithm for remote sensing images based on nonsubsampled contourlet transform," Acta Automatica Sinica, vol. 34, no. 3, pp. 274–281, 2008.
  12. L. B. Almeida, "The fractional Fourier transform and time-frequency representations," IEEE Transactions on Signal Processing, vol. 42, no. 11, pp. 3084–3091, 1994.
  13. R. Saxena and K. Singh, "Fractional Fourier transform: a novel tool for signal processing," Journal of the Indian Institute of Science, vol. 85, no. 1, pp. 11–26, 2005.
  14. T. Alieva, M. J. Bastiaans, and M. L. Calvo, "Fractional transforms in optical information processing," EURASIP Journal on Applied Signal Processing, vol. 2005, no. 10, pp. 1498–1519, 2005.
  15. R. Tao, Y. L. Li, and Y. Wang, "Short-time fractional Fourier transform and its applications," IEEE Transactions on Signal Processing, vol. 58, no. 5, pp. 2568–2580, 2010.
  16. E. Dinç, F. Demirkaya, D. Baleanu, Y. Kadioǧlu, and E. Kadioǧlu, "New approach for simultaneous spectral analysis of a complex mixture using the fractional wavelet transform," Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 4, pp. 812–818, 2010.
  17. J. Shi, N. Zhang, and X. P. Liu, "A novel fractional wavelet transform and its applications," Science China Information Sciences, vol. 55, no. 6, pp. 1270–1279, 2012.
  18. N. Taneja, B. Raman, and I. Gupta, "Selective image encryption in fractional wavelet domain," International Journal of Electronics and Communications, vol. 65, no. 4, pp. 338–344, 2011.
  19. T. Lei, Z. Feng, and Z. Zong-gui, "Multi-resolution image fusion based on the nonsubsampled Contourlet transform," Information and Control, vol. 37, no. 3, pp. 291–297, 2008.
  20. Z. Qiang and G. Baolong, "Fusion of multifocus images based on the nonsubsampled contourlet transform," Acta Photonica Sinica, vol. 37, no. 4, pp. 838–843, 2008.
  21. C. Candan, M. Alper Kutay, and H. M. Ozaktas, "The discrete fractional Fourier transform," IEEE Transactions on Signal Processing, vol. 48, no. 5, pp. 1329–1337, 2000.
  22. R. Tao, B. Deng, and Y. Wang, "Research progress of the fractional Fourier transform in signal processing," Science in China Information Sciences, vol. 49, no. 1, pp. 1–25, 2006.
  23. J. Shi, Y. Chi, and N. Zhang, "Multichannel sampling and reconstruction of bandlimited signals in fractional Fourier domain," IEEE Signal Processing Letters, vol. 17, no. 11, pp. 909–912, 2010.
  24. H. Yi, C. Y. Fan, J. G. Yang, and X. T. Huang, "Imaging and locating multiple ground moving targets based on keystone transform and FrFT for single channel SAR system," in Proceedings of the Asia-Pacific Conference on Synthetic Aperture Radar (APSAR '09), pp. 771–774, October 2009.
  25. D. Mendlovic, Z. Zalevsky, D. Mas, J. García, and C. Ferreira, "Fractional wavelet transform," Applied Optics, vol. 36, no. 20, pp. 4801–4806, 1997.
  26. T. Guanqun, L. Dapeng, and L. Guanghua, "On image fusion based on different rules of wavelet transform," Acta Photonica Sinica, vol. 33, no. 2, pp. 221–224, 2004.
  27. W. Huang and Z. Jing, "Evaluation of focus measures in multi-focus image fusion," Pattern Recognition Letters, vol. 28, no. 4, pp. 493–500, 2007.