Mathematical Problems in Engineering

Research Article | Open Access

Volume 2014 | Article ID 278945 | 7 pages | https://doi.org/10.1155/2014/278945

Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion

Academic Editor: Victoria Vampa
Received: 14 Mar 2014
Accepted: 19 Jun 2014
Published: 07 Jul 2014

Abstract

We propose a novel super-resolution multisource image fusion scheme based on compressive sensing and dictionary learning theory. Under the sparsity prior of image patches and the framework of compressive sensing, multisource image fusion is reduced to a signal recovery problem from compressive measurements. A set of multiscale dictionaries is then learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear-weights fusion rule is proposed to obtain the high-resolution image. Experiments are conducted to investigate the performance of the proposed method, and the results demonstrate its superiority to its counterparts.

1. Introduction

Fusion of multisource images that come from different modalities is very useful for obtaining a better understanding of environmental conditions, for example, the fusion of multifocus images, of infrared (IR) and visible images, of medical CT and MRI images, and of multispectral and panchromatic images. Multiresolution-based fusion approaches, including pyramid-based methods and discrete wavelet transform (DWT) based methods, have become popular techniques investigated by many researchers and have been shown to produce state-of-the-art results [1–4]. In recent years, the newly developed compressive sensing (CS) theory [5–8] has been introduced into image fusion. It is well known that compressive sensing provides a way of recovering sparse signals from their projections onto a small number of random vectors, so it also indicates a possible way of recovering high-resolution signals from their low-resolution versions.

Assume that a signal $x \in \mathbb{R}^n$ is compressible under a dictionary $D$; that is, $x = D\alpha$ with $\|\alpha\|_0 = K \ll n$, where $K$ is the number of nonzero components of $\alpha$. The main idea of CS is to recover the original signal $x$ from its compressive measurements $y = \Phi x$, where $\Phi \in \mathbb{R}^{m \times n}$ with $m \ll n$. Under the condition that the matrix $\Phi D$ satisfies the restricted isometry property (RIP), the signal $x$ can be accurately recovered from only $m = O(K \log(n/K))$ measurements [5] by solving the optimization problem
$$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_1 \quad \text{s.t.} \quad y = \Phi D \alpha. \tag{1}$$
Therefore, there are many advantages in combining the CS technique with image fusion applications [9–14].
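The recovery principle behind (1) can be sketched numerically. The $\ell_1$ program itself is usually solved by convex optimization; as a minimal stand-in, the greedy Orthogonal Matching Pursuit (OMP) sketch below recovers a synthetic sparse vector from random Gaussian measurements. All sizes and the sensing matrix here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def omp(A, y, k):
    """Greedy Orthogonal Matching Pursuit: find a k-sparse x with y ~ A x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # select the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # least-squares fit on the chosen support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                        # signal length, measurements, sparsity
alpha = np.zeros(n)
alpha[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
y = Phi @ alpha                             # compressive measurements
alpha_hat = omp(Phi, y, k)
print(np.linalg.norm(alpha - alpha_hat))    # near zero when recovery succeeds
```

With $m \gg K$ random Gaussian measurements, the greedy search identifies the true support with high probability, after which the least-squares fit recovers the coefficients essentially exactly.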

Nowadays the applications of compressive sensing technology in image processing can be classified into three categories: compressive sensing based imaging [15–21], compressive sensing based image processing [22–26], and “compressive sensing” form applications [27–29]. Imaging is one of the most successful applications of compressive sensing theory, where a few sensors or low-resolution sensors are employed to achieve high-resolution imaging, such as optical imaging [16, 17], medical imaging [18, 19], and hyperspectral imaging [20, 21]. Compressive sensing is also used to transform images into other spaces for more efficient analysis, such as texture classification [22] and super-resolution image construction [23]. Numerous works are of the “compressive sensing” form; that is, if the task can be reduced to the optimization problem shown in (1), these works are also called compressive sensing based applications.

In image fusion, most of the available compressive sensing based fusion schemes are of “compressive sensing” form [9–14, 27, 28]; that is, they do not consider simultaneous fusion and super-resolution of multisource images. In this paper, we present a solution for simultaneous fusion and super-resolution of multisource images via the recently developed compressive sampling theory. Under the sparsity prior of image patches and the framework of compressive sensing, multisource image fusion is reduced to a signal recovery problem from compressive measurements. A set of multiscale dictionaries is learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear-weights fusion rule is proposed. Experiments are conducted to investigate the performance of the proposed method, and the results demonstrate its superiority to its counterparts.

The rest of this paper is organized as follows. Our proposed simultaneous fusion and super-resolution scheme for multisource images is expounded in Section 2. In Section 3, experiments are made to compare the proposed method with other related fusion approaches. The conclusions are finally summarized in Section 4.

2. Simultaneous Fusion and Super-Resolution Scheme of Multisource Images

In this section, the foundations of our proposed method are illustrated, including the super-resolution multisource images fusion, the super-resolution multisource images fusion via compressive sensing, and the dictionary learning algorithm used in our approach.

2.1. Super-Resolution Multisource Images Fusion

Assume that the multisource images to be fused are low-resolution images; that is, the $i$th source image $y_i$ is a low-resolution version of the underlying scene $x$:
$$y_i = H x + v_i, \quad i = 1, \ldots, S, \tag{2}$$
where $S$ is the number of source images, $H$ is the down-resolution operator, and $v_i$ is the measurement noise of the $i$th source image. We aim to recover a high-resolution image $x$ from the multisource low-resolution images $\{y_i\}_{i=1}^{S}$.
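The degradation model (2) can be simulated directly. Since the down-resolution operator $H$ is not specified beyond its role, the sketch below assumes simple 2×2 block averaging plus additive Gaussian noise; both are illustrative choices, not the paper's operator.

```python
import numpy as np

def downsample(x, f=2):
    """A simple down-resolution operator H: average non-overlapping f-by-f blocks."""
    h, w = x.shape
    x = x[:h - h % f, :w - w % f]          # crop to a multiple of f
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

rng = np.random.default_rng(1)
x_hr = rng.random((512, 512))              # stand-in high-resolution scene x
# two noisy low-resolution sources y_i = H x + v_i
sources = [downsample(x_hr) + 0.01 * rng.normal(size=(256, 256)) for _ in range(2)]
print(sources[0].shape)  # (256, 256)
```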

Patch-based fusion is adopted in our method; that is, $x$ is processed in raster-scan order, from left to right and top to bottom, and then sequentially recovered. Let $x_k = R_k x$ denote the $k$th local patch vector extracted from the high-resolution fusion image $x$ at spatial location $k$, where $R_k$ denotes a rectangular windowing operator and overlapping is allowed. Given a set of $S$ LR patches $y_{i,k}$ taken from $y_i$ $(i = 1, \ldots, S)$, we have
$$y_{i,k} = \Phi x_k + v_{i,k}, \quad i = 1, \ldots, S. \tag{3}$$
A simple example of the matrix $\Phi$ is the 2:1 averaging operator
$$\Phi = \frac{1}{2}\begin{pmatrix} 1 & 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & 1 & \cdots & 0 & 0 \\ \vdots & & & & \ddots & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 & 1 \end{pmatrix}. \tag{4}$$
Our aim is to reconstruct the fusion image $x$ from the high-resolution patches $x_k$, which in turn are recovered from the corresponding sets of LR patches $y_{i,k}$. This is a simultaneous fusion and super-resolution problem of multisource images.

2.2. Super-Resolution Multisource Images Fusion via Compressive Sensing

According to the recently developed compressive sampling theory [5, 6], it is possible to recover $x_k$ from $y_{i,k}$ under the sparsity prior of $x_k$; that is, $x_k$ can be represented as a sparse linear combination over an overcomplete dictionary $D$ that is not coherent with the measurement (or sampling) matrix $\Phi$:
$$x_k = D \alpha_k, \quad \|\alpha_k\|_0 = K \ll N. \tag{5}$$
Here the “sparsity” of the decomposition coefficient $\alpha_k$ means $K \ll N$, where $N$ is the number of elements (or atoms) in the dictionary $D$. Under this sparsity assumption, $x_k$ can thus be reconstructed by taking only $m = O(K \log(N/K))$ measurements. As soon as the sparse coefficient $\alpha_k$ is determined by solving
$$\hat{\alpha}_k = \arg\min_{\alpha} \|\alpha\|_1 \quad \text{s.t.} \quad y_k = \Phi D \alpha, \tag{6}$$
an estimate of $x_k$ can be obtained using (5).

In our method, a linear fusion rule is performed on the measurements, with one weight per source. Considering the patch-by-patch processing pattern, we write the low-resolution fused patch as
$$y_k = \sum_{i=1}^{S} w_{i,k}\, y_{i,k}. \tag{7}$$
Applying the recovery (5)–(6) to $y_k$, its high-resolution version can be written as
$$\hat{x}_k = D \hat{\alpha}_k. \tag{8}$$
In our proposed method, we determine the weights from the local activity of each source patch, normalized so that $\sum_{i=1}^{S} w_{i,k} = 1$:
$$w_{i,k} = \frac{\operatorname{var}(y_{i,k})}{\sum_{j=1}^{S} \operatorname{var}(y_{j,k})}. \tag{9}$$
Because the patches are highly overlapping, the recovery of $x$ from the patches $\{\hat{x}_k\}$ becomes an overdetermined system, and it is straightforward to obtain the following least-squares solution in the patch aggregation:
$$\hat{x} = \Big(\sum_k R_k^{T} R_k\Big)^{-1} \sum_k R_k^{T} \hat{x}_k. \tag{10}$$
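The patch-level fusion and least-squares aggregation described above can be sketched on a toy example. The variance-based weights, patch size, overlap, and source images below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def fuse_patches(patches):
    """Linear fusion of co-located source patches, weighted by normalized
    patch variance (a common local-activity measure, used here as a stand-in)."""
    v = np.array([p.var() for p in patches])
    w = v / v.sum() if v.sum() > 0 else np.full(len(patches), 1 / len(patches))
    return sum(wi * p for wi, p in zip(w, patches))

def aggregate(patches, positions, shape, psize):
    """Least-squares aggregation of overlapping patches:
    x = (sum R_k^T R_k)^(-1) sum R_k^T x_k, i.e. a per-pixel average."""
    num, den = np.zeros(shape), np.zeros(shape)
    for p, (r, c) in zip(patches, positions):
        num[r:r + psize, c:c + psize] += p
        den[r:r + psize, c:c + psize] += 1
    return num / np.maximum(den, 1)

# toy usage: two 4x4 sources, 2x2 patches with 1-pixel overlap
rng = np.random.default_rng(2)
A, B = rng.random((4, 4)), rng.random((4, 4))
pos = [(r, c) for r in range(3) for c in range(3)]
fused = [fuse_patches([A[r:r+2, c:c+2], B[r:r+2, c:c+2]]) for r, c in pos]
img = aggregate(fused, pos, (4, 4), 2)
print(img.shape)  # (4, 4)
```

Because the windowing operators $R_k$ simply extract pixels, $\sum_k R_k^T R_k$ is diagonal (each entry counts how many patches cover that pixel), so the least-squares solution reduces to the per-pixel average computed above.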

2.3. Dictionary Learning Algorithm

The compressibility of patches shown in (5) is the sparsity prior used in our method. In order to generate several overcomplete dictionaries $D_s$ that represent the underlying HR patches well, we propose an algorithm that adaptively tunes each dictionary from a set of high-resolution multisource sample image patches. In this section, we reduce the learning of the dictionaries $D_s$ to another sparsity-oriented optimization problem. Recent research on image statistics suggests that image patches can be well represented as a sparse linear combination of elements from an appropriately chosen overcomplete dictionary. Under this assumption, the HR image patch set $X = [x_1, \ldots, x_P]$ sampled from some training HR images can be represented as a sparse linear combination over a dictionary $D_s$; that is,
$$x_j = D_s \alpha_j, \quad \|\alpha_j\|_0 \le K, \quad j = 1, \ldots, P, \tag{11}$$
with sparse coefficient vectors $\alpha_j$. The objective of designing $D_s$ is to minimize the reconstruction error over $X$ under the sparsity assumption; that is,
$$\min_{D_s, \{\alpha_j\}} \sum_{j=1}^{P} \|x_j - D_s \alpha_j\|_2^2 \quad \text{s.t.} \quad \|\alpha_j\|_0 \le K. \tag{12}$$
We reformulate (12) as
$$\min_{D_s, \Lambda} \|X - D_s \Lambda\|_F^2 \quad \text{s.t.} \quad \|\alpha_j\|_0 \le K \ \forall j, \tag{13}$$
where $\Lambda = [\alpha_1, \ldots, \alpha_P]$ is the coefficient matrix. To solve it, the K-SVD dictionary learning algorithm is used to train the dictionaries $D_s$ from $X$ [30, 31].
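K-SVD alternates sparse coding with rank-1 atom updates. The compact sketch below is a simplified rendering of the algorithm of [30, 31], not the exact reference implementation; the patch and dictionary sizes are illustrative.

```python
import numpy as np

def omp_code(D, X, k):
    """Sparse-code each column of X over dictionary D using at most k atoms."""
    A = np.zeros((D.shape[1], X.shape[1]))
    for j in range(X.shape[1]):
        r, support = X[:, j].copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(D.T @ r))))
            coef, *_ = np.linalg.lstsq(D[:, support], X[:, j], rcond=None)
            r = X[:, j] - D[:, support] @ coef
        A[support, j] = coef
    return A

def ksvd(X, n_atoms, k, n_iter=5, seed=0):
    """Compact K-SVD: alternate OMP sparse coding and rank-1 atom updates."""
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
    for _ in range(n_iter):
        A = omp_code(D, X, k)
        for a in range(n_atoms):
            used = np.flatnonzero(A[a])
            if used.size == 0:
                continue
            # error matrix without atom a, restricted to signals that use it
            E = X[:, used] - D @ A[:, used] + np.outer(D[:, a], A[a, used])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, a] = U[:, 0]                   # updated (unit-norm) atom
            A[a, used] = s[0] * Vt[0]           # updated coefficients
    return D

patches = np.random.default_rng(3).random((64, 500))  # 8x8 patches as columns
D = ksvd(patches, n_atoms=128, k=5, n_iter=3)
print(D.shape)  # (64, 128)
```

In the multiscale setting described above, this training would be repeated once per scale, yielding one overcomplete dictionary $D_s$ per group of sample patches.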

3. Experiment Results

To evaluate the performance of the proposed fusion algorithm, in this section we implement it on several multisource image pairs, including multifocus images and infrared (IR)/visible images, as shown in Figure 1. The size of all images used in the test is 256 × 256 pixels, and we aim to recover 512 × 512 images. We compare our method with the following two related methods.

Method (1). Consider the multimodal image fusion with joint sparsity model [32].

Method (2). Consider the image features extraction and fusion based on joint sparse representation [33].

The computed results are compared both by subjective visual quality and by several standard fusion metrics. The simulations are conducted in MATLAB R2009 on a PC with an Intel Core 2 1.8 GHz CPU and 1 GB of RAM.

The fusion results of the three methods are shown in Figures 2, 3, 4, and 5; in each figure, from left to right, are Method 1, our method, and Method 2. From them we can see that our method obtains higher-resolution fusion images while fusing the multisource images. Compared with the rules in [32, 33], our proposed method better preserves directional information. The numerical results are shown in Table 1, where several measures, including entropy (E), mutual information (MI), average gradient (AG), standard deviation (SD), cross entropy (CE), universal image quality index (UIQI) [34], and correlation coefficient (CC), are calculated from the fusion images derived by the different methods.


Table 1: Objective evaluation of the fusion results.

Images                  Measure   Our method   Method 1 [32]   Method 2 [33]
Figures 1(a) and 1(b)   E         7.3026       7.2996          7.3012
                        MI        6.7899       6.7741          6.7560
                        AG        2.9150       2.8183          2.8250
                        SD        10.6438      10.6607         10.6643
                        CE        0.3438       0.3428          0.3432
                        UIQI      0.9832       0.9901          0.9943
                        CC        0.9881       0.9884          0.9882

Figures 1(c) and 1(d)   E         6.1440       6.4647          6.5612
                        MI        5.0346       3.6546          3.9025
                        AG        7.3064       6.1164          7.1345
                        SD        8.7785       8.7267          8.9131
                        CE        6.0068       0.5881          6.9546
                        UIQI      0.5446       0.5328          0.5421
                        CC        0.6547       0.6459          0.6425

Figures 1(e) and 1(f)   E         7.6970       7.5243          7.6133
                        MI        7.3204       5.7659          4.3234
                        AG        10.1423      7.4218          9.3781
                        SD        11.3244      10.6344         11.2825
                        CE        0.6622       0.6729          0.7545
                        UIQI      0.7144       0.7204          0.7131
                        CC        0.8395       0.8782          0.8179

Figures 1(g) and 1(h)   E         6.6827       6.2301          7.0025
                        MI        2.1098       1.4961          1.5836
                        AG        5.3866       3.5715          6.6887
                        SD        8.5479       7.8916          8.7738
                        CE        0.5097       0.8127          0.2107
                        UIQI      0.9116       0.9244          0.9079
                        CC        0.5242       0.6333          0.4697

Here UIQI is used to estimate the subjective visual effect; it combines spatial correlation, luminance distortion, and contrast distortion, and it can embody the similarity between the fused image and the original images. It is defined as
$$Q = \frac{4\,\sigma_{xy}\,\bar{x}\,\bar{y}}{(\sigma_x^2 + \sigma_y^2)(\bar{x}^2 + \bar{y}^2)}, \tag{14}$$
where $\sigma_{xy}$ is the covariance of the fused image $x$ and source image $y$, and $\sigma_x, \sigma_y$ and $\bar{x}, \bar{y}$ are the standard deviations and the means of the two images, respectively. From Table 1 we can see that the numerical results accord with the subjective results.
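The index can be computed in a few lines. The sketch below evaluates the Wang–Bovik formula globally over the whole image, whereas the original index [34] averages it over sliding windows; the 4×4 test array is an illustrative input.

```python
import numpy as np

def uiqi(x, y):
    """Universal Image Quality Index (global version; the original averages
    the index over sliding windows)."""
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()      # covariance of the two images
    return 4 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

a = np.arange(16, dtype=float).reshape(4, 4)
print(uiqi(a, a))  # identical images give Q close to 1
```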

4. Conclusions

In this paper we have proposed a novel super-resolution multisource image fusion scheme based on compressive sensing and dictionary learning. Under the sparsity prior of image patches and the framework of compressive sensing, multisource image fusion is reduced to a signal recovery problem from compressive measurements. A new linear-weights fusion rule was proposed. A set of multiscale dictionaries is learned from several groups of high-resolution (HR) sample image patches, and a higher-resolution fusion image can be obtained from the multisource images. Experiments demonstrate the superiority of the proposed method over its counterparts.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by the Natural Science Foundation of Jiangsu Province of China under Grant no. BK20130769.

References

  1. J. Nünez, X. Otazu, O. Fors, A. Prades, V. Palà, and R. Arbiol, “Multiresolution-based image fusion with additive wavelet decomposition,” IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 3, pp. 1204–1211, 1999.
  2. B. Aiazzi, L. Alparone, S. Baronti, and A. Garzelli, “Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis,” IEEE Transactions on Geoscience and Remote Sensing, vol. 40, no. 10, pp. 2300–2312, 2002.
  3. G. Piella, “A general framework for multiresolution image fusion: from pixels to regions,” Information Fusion, vol. 4, no. 4, pp. 259–280, 2003.
  4. J. J. Lewis, R. J. O’Callaghan, S. G. Nikolov, D. R. Bull, and N. Canagarajah, “Pixel- and region-based image fusion with complex wavelets,” Information Fusion, vol. 8, no. 2, pp. 119–130, 2007.
  5. D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  6. E. J. Candes and T. Tao, “Near-optimal signal recovery from random projections: universal encoding strategies?” IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
  7. Z. Xue, W. Anhong, Z. Bing, and L. Lei, “Adaptive distributed compressed video sensing,” Journal of Information Hiding and Multimedia Signal Processing, vol. 5, no. 1, pp. 98–106, 2014.
  8. X. Sun, W. Song, Y. Lvi, and L. Tang, “A new compressed sensing algorithm design based on wavelet frame and dictionary,” Journal of Information Hiding and Multimedia Signal Processing, vol. 5, no. 2, pp. 234–241, 2014.
  9. T. Wan, N. Canagarajah, and A. Achim, “Compressive image fusion,” in Proceedings of the 15th IEEE International Conference on Image Processing (ICIP '08), pp. 1308–1311, San Diego, Calif, USA, October 2008.
  10. T. Wan and Z. Qin, “An application of compressive sensing for image fusion,” International Journal of Computer Mathematics, vol. 88, no. 18, pp. 3915–3930, 2011.
  11. X. Luo, J. Zhang, J. Yang, and Q. Dai, “Image fusion in compressed sensing,” in Proceedings of the 16th IEEE International Conference on Image Processing (ICIP '09), pp. 2205–2208, November 2009.
  12. P. Boufounos, G. Kutyniok, and H. Rauhut, “Sparse recovery from combined fusion frame measurements,” IEEE Transactions on Information Theory, vol. 57, no. 6, pp. 3864–3876, 2011.
  13. A. Y. Yang, S. Maji, K. Hong, P. Yan, and S. S. Sastry, “Distributed compression and fusion of nonnegative sparse signals for multiple-view object recognition,” in Proceedings of the 12th International Conference on Information Fusion (FUSION '09), pp. 1867–1874, Seattle, Wash, USA, July 2009.
  14. A. Divekar and O. Ersoy, “Image fusion by compressive sensing,” in Proceedings of the 17th International Conference on Geoinformatics, pp. 1–6, Fairfax, Va, USA, August 2009.
  15. W. L. Chan, K. Charan, D. Takhar, K. F. Kelly, R. G. Baraniuk, and D. M. Mittleman, “A single-pixel terahertz imaging system based on compressed sensing,” Applied Physics Letters, vol. 93, no. 12, Article ID 121105, 2008.
  16. J. Ma, “Single-pixel remote sensing,” IEEE Geoscience and Remote Sensing Letters, vol. 6, no. 2, pp. 199–203, 2009.
  17. M. F. Duarte, M. A. Davenport, D. Takbar et al., “Single-pixel imaging via compressive sampling: building simpler, smaller, and less-expensive digital cameras,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83–91, 2008.
  18. Y. Kim, S. S. Narayanan, and K. S. Nayak, “Accelerated three-dimensional upper airway MRI using compressed sensing,” Magnetic Resonance in Medicine, vol. 61, no. 6, pp. 1434–1440, 2009.
  19. J. Trzasko and A. Manduca, “Highly undersampled magnetic resonance image reconstruction via homotopic l0-minimization,” IEEE Transactions on Medical Imaging, vol. 28, no. 1, pp. 106–121, 2009.
  20. M. Tello Alonso, P. López-Dekker, and J. J. Mallorquí, “A novel strategy for radar imaging based on compressive sensing,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 12, pp. 4285–4295, 2010.
  21. L. Zhang, M. Xing, C.-W. Qiu et al., “Resolution enhancement for inversed synthetic aperture radar imaging under low SNR via improved compressive sensing,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 10, pp. 3824–3838, 2010.
  22. L. Liu and P. Fieguth, “Texture classification from random features,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 3, pp. 574–586, 2012.
  23. S. Yang, M. Wang, Y. Sun, F. Sun, and L. Jiao, “Compressive sampling based single-image super-resolution reconstruction by dual-sparsity and non local similarity regularizer,” Pattern Recognition Letters, vol. 33, no. 9, pp. 1049–1059, 2012.
  24. W. Dai, O. Milenkovic, M. A. Sheikh, and R. G. Baraniuk, “Probe design for compressive sensing DNA microarrays,” in Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM '08), pp. 163–169, IEEE, Philadelphia, Pa, USA, November 2008.
  25. V. K. Goyal, A. K. Fletcher, and S. Rangan, “Compressive sampling and lossy compression: do random measurements provide an efficient method of representing sparse signals?” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 48–56, 2008.
  26. C. Patsakis and N. Aroukatos, “LSB and DCT steganographic detection using compressive sensing,” Journal of Information Hiding and Multimedia Signal Processing, vol. 5, no. 1, pp. 20–32, 2014.
  27. B. Yang and S. Li, “Pixel-level image fusion with simultaneous orthogonal matching pursuit,” Information Fusion, vol. 13, no. 1, pp. 10–19, 2012.
  28. S. Li and B. Yang, “A new pan-sharpening method using a compressed sensing technique,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 2, pp. 738–746, 2011.
  29. H.-C. Huang and F.-C. Chang, “Robust image watermarking based on compressed sensing techniques,” Journal of Information Hiding and Multimedia Signal Processing, vol. 5, no. 2, pp. 275–285, 2014.
  30. M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3736–3745, 2006.
  31. M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
  32. S. Li and H. Yin, “Multimodal image fusion with joint sparsity model,” Optical Engineering, vol. 50, no. 6, Article ID 067007, 2011.
  33. N. Yu, T. Qiu, F. Bi, and A. Wang, “Image features extraction and fusion based on joint sparse representation,” IEEE Journal on Selected Topics in Signal Processing, vol. 5, no. 5, pp. 1074–1082, 2011.
  34. Z. Wang and A. C. Bovik, “A universal image quality index,” IEEE Signal Processing Letters, vol. 9, no. 3, pp. 81–84, 2002.

Copyright © 2014 Kan Ren et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

