Research Article  Open Access
Wenkao Yang, Jing Wang, Jing Guo, "A Novel Algorithm for Satellite Images Fusion Based on Compressed Sensing and PCA", Mathematical Problems in Engineering, vol. 2013, Article ID 708985, 10 pages, 2013. https://doi.org/10.1155/2013/708985
A Novel Algorithm for Satellite Images Fusion Based on Compressed Sensing and PCA
Abstract
This paper studies the fusion of a high-resolution panchromatic image with a low-resolution multispectral image. Building on the classic remote sensing image fusion algorithms, the PCA (principal component analysis) transform and the discrete wavelet transform, we carry out in-depth research. Compressed sensing (CS) abandons full sampling and shifts from sampling the signal to sampling its information, which greatly reduces the potential cost of traditional signal acquisition and processing. We combine compressed sensing with a satellite remote sensing image fusion algorithm and propose an innovative fusion algorithm (CS-FWT-PCA), in which the symmetric fractional B-spline wavelet acts as the sparse basis. The algorithm uses the Hadamard matrix as the measurement matrix, SAMP as the reconstruction algorithm, and an improved fusion rule based on the local variance. Simulation results show that the CS-FWT-PCA fusion algorithm achieves a better fusion effect than traditional fusion methods.
1. Introduction
Numerous interference factors are mixed into the process of image acquisition and transmission, so the images we obtain are essentially random signals. PCA [1], also known as the Karhunen-Loève transform [2], is designed to transform such random images. It performs a multidimensional orthogonal linear transformation based on the statistical characteristics of the image. As a dimension reduction technique, it transforms many components into a few comprehensive components that retain as much of the original variable information as possible. PCA concentrates variance, compresses data size, and presents the remote sensing information of the multiband data structure more precisely, which yields the statistically best approximation to the original image. PCA is widely applied, mainly to the fusion of multiband images. Chavez was the first to apply PCA to multisensor image fusion: he fused Landsat-TM multispectral and SPOT PAN panchromatic images with remarkable results [3].
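As an illustration of the dimension-reduction idea, the sketch below (assuming NumPy; the function name `pca_transform` and the synthetic bands are ours, not from the paper) applies PCA to a stack of correlated image bands and shows that the first principal component concentrates most of the variance:

```python
import numpy as np

def pca_transform(bands):
    """Apply PCA to a stack of image bands.

    bands: array of shape (n_bands, H, W).
    Returns (components, eigvecs, mean), where components[0] is the
    first principal component image.
    """
    n, h, w = bands.shape
    X = bands.reshape(n, -1).astype(float)          # one row per band
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / (Xc.shape[1] - 1)             # n x n band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]               # sort by descending variance
    eigvecs = eigvecs[:, order]
    components = (eigvecs.T @ Xc).reshape(n, h, w)  # principal component images
    return components, eigvecs, mean

# Example: three correlated 8x8 "bands"
rng = np.random.default_rng(0)
base = rng.random((8, 8))
bands = np.stack([base + 0.05 * rng.random((8, 8)) for _ in range(3)])
pcs, _, _ = pca_transform(bands)
variances = [pcs[i].var() for i in range(3)]
# The first component concentrates most of the variance.
```

Because the bands are strongly correlated, nearly all of the variance collapses into the first component, which is exactly why PCA-based fusion replaces or merges that component with the panchromatic image.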
Olshausen and Field [4] published a paper in Nature in 1996, indicating that the mammalian visual cortex represents image features sparsely. Since then, research on sparse image modeling has attracted broad attention; excellent tools (curvelets [5], bandelets [6], etc.) and methods (basis pursuit (BP) [7], matching pursuit (MP) [8], etc.) for sparse image representation have been proposed. The development of compressed sensing theory [9–12] is based on sparse representation. CS samples and compresses at the same time; its basic idea is to collect information directly related to the useful object. The obtained values are projections from a high-dimensional space onto a low-dimensional one. The main research topics of CS include projection measurement methods, reconstruction conditions, and image reconstruction methods [13–15].
We integrate compressed sensing theory into PCA and propose a fusion method based on the CS-FWT-PCA algorithm. We apply the proposed algorithm, the traditional PCA transform, and an improved PCA transform, respectively, to image fusion. Simulation results show that the fusion image based on CS-FWT-PCA has good spatial resolution and also efficiently keeps the spectral features of the original multispectral image.
2. Compressed Sensing Theory for Satellite Remote Sensing Image Fusion
Candès and Tao [16] pioneered the concept of compressed sensing in 2006. Built on signal harmonic analysis, matrix analysis, sparse representation, statistics and probability theory, time-frequency analysis, functional analysis, and optimal reconfiguration, CS has developed rapidly. It aims to obtain information from the signal directly, independent of physical measures such as signal frequency. As long as a signal is compressible in some sparse domain, its transform coefficients can be linearly projected onto a low-dimensional observation vector by a measurement matrix that is incoherent with the transformation matrix. The original signal can then be precisely reconstructed from far fewer samples by sparse optimization theory, since the samples contain enough information. This makes CS well suited to satellite remote sensing, where a high-resolution signal must be recovered from low-resolution observations. CS theory mainly includes three parts: sparse representation, the design of the measurement matrix, and the reconstruction algorithm. For signals that admit a sparse representation, its advantage is that it merges traditional data acquisition with data compression, compressing the data while acquiring the signal; this greatly reduces the potential cost of traditional signal acquisition and processing.
2.1. Mathematical Model of Compressed Sensing Theory
The traditional linear measurement model, written in matrix form, is
$y = \Phi x$, (1)
where $x \in \mathbb{R}^N$ is the signal, $\Phi$ is an $M \times N$ measurement matrix, and $y \in \mathbb{R}^M$ is the measurement vector. From signal theory, we know that an $N$-dimensional signal $x$ has a linear representation over an orthogonal basis $\Psi = [\psi_1, \psi_2, \ldots, \psi_N]$ (each $\psi_i$ is an $N$-dimensional vector):
$x = \Psi \alpha$, (2)
with expansion coefficient vector $\alpha = \Psi^{T} x$.
Substituting (2) into (1) and denoting the CS information operator $\Theta = \Phi \Psi$, we get
$y = \Phi \Psi \alpha = \Theta \alpha$. (3)
Under compression, the number of measurements is far less than the length of the signal ($M \ll N$). From (1) we see that recovering $x$ from $y$ is an ill-conditioned problem, because the number of unknowns is greater than the number of equations, which means there exist infinitely many solutions. But if $x$ is a compressible sparse signal, $\alpha$ in formula (2) is also sparse; although recovering $\alpha$ from $y$ remains ill-conditioned, the number of effective unknowns is greatly reduced, making signal reconstruction possible [17]. Signal reconstruction in compressed sensing theory means looking for the optimum solution under a constraint. It extracts the signal by solving the optimization problem under the $\ell_0$ norm, which can be formulated as
$\min \|\alpha\|_0$ subject to $y = \Theta \alpha$. (4)
From formula (4) the sparse coefficient vector $\alpha$ can be estimated. The convex-optimization compressed sensing recovery framework under the $\ell_1$ norm is an important innovation proposed by Donoho and Candès. Its main idea is to replace the nonconvex optimization objective in formula (4) by the $\ell_1$ norm:
$\min \|\alpha\|_1$ subject to $y = \Theta \alpha$. (5)
Thus, the optimization problem in formula (4) is turned into a convex optimization problem, whose result can be obtained by solving a linear program.
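A minimal numerical sketch of the $\ell_1$ relaxation in (5): basis pursuit posed as a linear program, solved here with SciPy's `linprog`. The dimensions, random operator, and seed below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit: min ||a||_1  s.t.  Theta a = y.
# Split a = u - v with u, v >= 0 and solve the equivalent linear program.
rng = np.random.default_rng(1)
m, n, k = 30, 60, 3                      # m measurements, n unknowns, k nonzeros
Theta = rng.standard_normal((m, n))      # stand-in CS information operator (Phi * Psi)
a_true = np.zeros(n)
a_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Theta @ a_true                       # low-dimensional observations

c = np.ones(2 * n)                       # objective: sum(u) + sum(v) = ||a||_1
A_eq = np.hstack([Theta, -Theta])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
a_hat = res.x[:n] - res.x[n:]
err = np.linalg.norm(a_hat - a_true)
```

With only 30 equations for 60 unknowns the system is underdetermined, yet the $\ell_1$ program recovers the 3-sparse coefficient vector, illustrating why the convex relaxation is usable in practice.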
In conclusion, the implementation of compressed sensing theory involves three basic elements: the sparse representation of signals, incoherent observation via the measurement matrix, and nonlinear optimization reconstruction of signals. Signal sparsity is the necessary condition for CS theory, the measurement matrix is the key, and nonlinear optimization is the approach by which CS theory reconstructs the signal [11]. The framework of compressed sensing theory is shown in Figure 1.
The differences between CS theory and the traditional sampling theorem [18] are as follows.
Firstly, the traditional sampling theorem considers infinite-length continuous signals, whereas CS theory concerns finite-dimensional vectors.
Secondly, the traditional sampling theorem obtains data by uniform sampling; by contrast, CS theory obtains observed data via inner products of the signal with measurement functions.
Lastly, signal reconstruction differs. Traditional sampling recovery obtains the signal by linear interpolation with the sinc function, but CS theory solves a highly nonlinear optimization problem over the observed data to recover the signal.
3. CS-FWT-PCA-Based Satellite Remote Sensing Image Fusion
We apply compressed sensing theory combined with PCA to satellite remote sensing image fusion and choose the fractional spline wavelet as the sparse basis. The fusion rules are improved to increase the spatial resolution, enhance the spectral information, and accelerate fusion on large data sets. The fractional spline wavelet transform is similar to the traditional wavelet transform: its coefficients consist of a small number of large-magnitude coefficients and a large number of small-magnitude coefficients, which adequately reflect the local variation of the original image and provide favorable conditions for image fusion.
3.1. Fractional Spline Wavelet Transform
In 1999, Unser and Blu first generalized the polynomial spline to fractional orders, constructing fractional spline functions through fractional differences, and gave concrete expressions [19–21]. They proved that these functions have good enough properties to serve as wavelet basis functions for a wavelet transform. Since the order of this wavelet transform can be a fraction, it is called the fractional-order spline wavelet transform.
The symmetric fractional spline function of order $\alpha$ ($\alpha > -1/2$, real) is defined in the Fourier domain by
$\hat{\beta}_{*}^{\alpha}(\omega) = \left| \dfrac{\sin(\omega/2)}{\omega/2} \right|^{\alpha+1}.$
Unser and Blu proved in [19–21] that the $\alpha$-order symmetric fractional spline function has the multiresolution property, so it can be used to construct a wavelet basis function, and it satisfies the two-scale equation
$\beta_{*}^{\alpha}\!\left(\frac{x}{2}\right) = \sum_{k \in \mathbb{Z}} h_{*}^{\alpha}[k]\, \beta_{*}^{\alpha}(x-k),$
whose refinement filter has the frequency response $H_{*}^{\alpha}(e^{j\omega}) = 2\,|\cos(\omega/2)|^{\alpha+1}$. The symmetric fractional spline functions form a Riesz basis; through orthogonalization and normalization they become an orthonormal basis and can be used as a sparse basis for the sparse transformation of signals.
The fractional spline function belongs to the $L^2$ space when the condition $\alpha > -1/2$ holds [19]. The experiment therefore selects $\alpha > -1/2$, because the wavelet transform is carried out in the $L^2$ space. With symmetric fractional spline wavelets in $L^2$, an orthogonal filter bank can be constructed to obtain the corresponding symmetric fractional spline wavelet transform.
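The Fourier-domain definition can be checked numerically. The sketch below (using the standard Unser-Blu definition; the code is ours, not the paper's) verifies the two-scale relation $\hat{\beta}_{*}^{\alpha}(2\omega) = |\cos(\omega/2)|^{\alpha+1}\,\hat{\beta}_{*}^{\alpha}(\omega)$ for one admissible order:

```python
import numpy as np

def beta_hat(omega, alpha):
    """Fourier transform of the symmetric fractional B-spline of order alpha:
    |sin(w/2) / (w/2)|^(alpha+1), with the removable singularity at w = 0."""
    w = np.asarray(omega, dtype=float)
    out = np.ones_like(w)
    nz = w != 0
    out[nz] = np.abs(np.sin(w[nz] / 2) / (w[nz] / 2)) ** (alpha + 1)
    return out

alpha = 0.5                              # any alpha > -1/2 gives an L2 function
w = np.linspace(-6, 6, 401)
lhs = beta_hat(2 * w, alpha)
rhs = np.abs(np.cos(w / 2)) ** (alpha + 1) * beta_hat(w, alpha)
# Two-scale relation: beta_hat(2w) = |cos(w/2)|^(alpha+1) * beta_hat(w)
```

The relation follows from $\sin\omega = 2\sin(\omega/2)\cos(\omega/2)$, and it is exactly this refinement property that makes a multiresolution analysis, and hence a wavelet transform, possible for fractional orders.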
3.2. Determining the Fusion Rules
This paper presents an improved fusion rule. The images are registered before the PCA transform is applied to the MS image. We then apply an $N$-level symmetric fractional spline wavelet transform to sparsify the matched PAN image and the first principal component of the PCA-transformed MS image. After the sparse transform, each level decomposes into one low-frequency sparse matrix and a series of high-frequency sparse matrices. We fuse the coefficients separately, because the high- and low-frequency sparse coefficients have different characteristics. The low-frequency sparse coefficients represent the approximate image, and their variation is not obvious, so ordinary weighted-average fusion [22] is used for the low-frequency subimage. The high-frequency coefficients, by contrast, differ markedly and carry significant details of the original image, such as bright lines and edges; to obtain a better fusion effect, fusion rules based on regional feature selection [23] are adopted for them. Selecting a different fusion strategy adaptively for the different parts of the image (the high-frequency and low-frequency subimages) improves the quality of image fusion more effectively.
3.3. CS-FWT-PCA-Based Satellite Remote Sensing Image Fusion Algorithm
The flow chart of the satellite remote sensing image fusion algorithm based on compressed sensing, the PCA transform, and the fractional spline wavelet transform (CS-FWT-PCA) is shown in Figure 2.
The concrete steps are as follows.
Register the PAN image with the MS image by using a SURF-based registration method to obtain the image PAN1.
Apply PCA to the MS image to obtain the first principal component and the other principal components; then apply an $\alpha$-order, $N$-level symmetric fractional spline wavelet decomposition to the first principal component and sparsify it to obtain the high-frequency sparse matrices and the low-frequency sparse matrix at each level.
Perform histogram matching [24] of PAN1 against the first principal component of the MS image obtained in the previous step to get the enhanced image PAN2; then apply the same $N$-level symmetric fractional spline wavelet decomposition and sparsification to obtain the high-frequency sparse matrices and the low-frequency sparse matrix at each level.
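The histogram matching in this step can be sketched as quantile mapping. The following is a minimal illustration with synthetic stand-in images, not the authors' implementation:

```python
import numpy as np

def histogram_match(source, reference):
    """Map source pixel values so their distribution matches the reference's
    (quantile mapping); the two arrays may have different shapes."""
    s_shape = source.shape
    s = source.ravel().astype(float)
    r = np.sort(reference.ravel().astype(float))
    # Rank of each source pixel -> corresponding quantile of the reference
    s_idx = np.argsort(s)
    quantiles = np.linspace(0, 1, s.size)
    matched = np.empty_like(s)
    matched[s_idx] = np.interp(quantiles, np.linspace(0, 1, r.size), r)
    return matched.reshape(s_shape)

rng = np.random.default_rng(2)
pan = rng.normal(120, 30, (16, 16))      # stand-in PAN image
pc1 = rng.normal(80, 10, (16, 16))       # stand-in first principal component
pan2 = histogram_match(pan, pc1)
# pan2 now has (approximately) the gray-level distribution of pc1
```

Matching the PAN histogram to the first principal component reduces the radiometric mismatch before the wavelet coefficients of the two images are fused.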
Fuse the low-frequency sparse matrices of the two images at each level by using the weighted-average method [22] to obtain the low-frequency coefficients of the fusion image.
The correlation coefficient of the two low-frequency subimages $A$ and $B$, both of size $M \times N$, is defined as
$C(A,B) = \dfrac{\sum_{i=1}^{M}\sum_{j=1}^{N} \bigl(A(i,j)-\bar{A}\bigr)\bigl(B(i,j)-\bar{B}\bigr)}{\sqrt{\sum_{i,j}\bigl(A(i,j)-\bar{A}\bigr)^{2}\,\sum_{i,j}\bigl(B(i,j)-\bar{B}\bigr)^{2}}},$
where $\bar{A}$ and $\bar{B}$, respectively, represent the mean values of the wavelet low-frequency coefficients of $A$ and $B$. The fusion weights $w_1$ and $w_2$ are determined from $C$, with $w_1 + w_2 = 1$. The fused low-frequency coefficient is then calculated by
$L_F(i,j) = w_1 A(i,j) + w_2 B(i,j).$
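A small sketch of the low-frequency rule. The equal weights $w_1 = w_2 = 0.5$ below are an illustrative assumption; the paper derives the weights from the correlation coefficient:

```python
import numpy as np

def correlation(a, b):
    """Correlation coefficient of two low-frequency coefficient matrices."""
    a, b = a.astype(float), b.astype(float)
    am, bm = a - a.mean(), b - b.mean()
    return (am * bm).sum() / np.sqrt((am ** 2).sum() * (bm ** 2).sum())

def fuse_lowfreq(a, b, w1=0.5):
    """Weighted-average fusion of low-frequency coefficients:
    F = w1 * A + (1 - w1) * B, with w1 + w2 = 1."""
    return w1 * a + (1 - w1) * b

rng = np.random.default_rng(3)
A = rng.random((8, 8))
B = 0.7 * A + 0.3 * rng.random((8, 8))   # low-frequency matrix correlated with A
C = correlation(A, B)
F = fuse_lowfreq(A, B, w1=0.5)
```

Because the low-frequency subimages of the PAN and MS decompositions are strongly correlated, a weighted average preserves the common approximation content without introducing artifacts.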
Fuse the two high-frequency sparse matrices of each level via the regional feature selection method [23] to obtain the fused high-frequency coefficients.
Determine a local region $\Omega$ of size $p \times q$ (e.g., $3 \times 3$) centered at the point $(m,n)$, which indexes a pixel of the wavelet high-frequency coefficient matrix. Let $A(m,n)$ and $B(m,n)$ be the wavelet coefficients of the two images at that point, and let $\bar{A}(m,n)$ denote the mean of $A$ over the region $\Omega$.
First, the local deviation is defined as
$\sigma_A^2(m,n) = \sum_{(i,j)\in\Omega} w(i,j)\,\bigl(A(m+i,n+j) - \bar{A}(m,n)\bigr)^2,$
wherein $w(i,j)$ represents a weighting factor satisfying the condition $\sum_{(i,j)\in\Omega} w(i,j) = 1$. The nearer the point $(m+i,n+j)$ is to the center $(m,n)$, the greater the weighting factor; the same rule yields $\sigma_B^2(m,n)$.
Second, the matching matrix is expressed as
$M(m,n) = \dfrac{2\sum_{(i,j)\in\Omega} w(i,j)\,\bigl(A(m+i,n+j)-\bar{A}(m,n)\bigr)\bigl(B(m+i,n+j)-\bar{B}(m,n)\bigr)}{\sigma_A^2(m,n) + \sigma_B^2(m,n)}.$
The values of the points in the match matrix lie in $[-1, 1]$; the closer a value is to 1, the higher the correlation degree of the two high-frequency images at that point.
Set the threshold of matching degree $T$ in the range $[0.5, 1)$.
If $M(m,n) < T$, select the coefficient with the larger local deviation:
$F(m,n) = \begin{cases} A(m,n), & \sigma_A^2(m,n) \ge \sigma_B^2(m,n), \\ B(m,n), & \text{otherwise.} \end{cases}$
Otherwise, use the weighted combination
$F(m,n) = w_{\max} X_{\max}(m,n) + w_{\min} X_{\min}(m,n),$
where $X_{\max}$ denotes the coefficient with the larger local deviation and
$w_{\min} = \dfrac{1}{2} - \dfrac{1}{2}\cdot\dfrac{1-M(m,n)}{1-T}, \qquad w_{\max} = 1 - w_{\min}.$
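The region-based high-frequency rule can be sketched with windowed statistics. This follows the classic select-or-average scheme; `uniform_filter` from SciPy stands in for the local averaging, and the uniform (unweighted) window is an assumption of this sketch:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_highfreq(a, b, threshold=0.75, size=3):
    """Region-based fusion of two high-frequency coefficient matrices.

    Where the local match measure is below the threshold, the coefficient
    with the larger local deviation is selected; where it is above, a
    weighted average is used (weights derived from the match measure)."""
    a, b = a.astype(float), b.astype(float)
    am = uniform_filter(a, size)
    bm = uniform_filter(b, size)
    # Local deviations (windowed variances) and local match measure
    da = uniform_filter((a - am) ** 2, size)
    db = uniform_filter((b - bm) ** 2, size)
    cross = uniform_filter((a - am) * (b - bm), size)
    match = 2 * cross / (da + db + 1e-12)

    w_min = 0.5 - 0.5 * (1 - match) / (1 - threshold)
    w_max = 1 - w_min
    a_bigger = da >= db
    select = np.where(a_bigger, a, b)                    # larger-deviation coefficient
    weight = np.where(a_bigger, w_max * a + w_min * b,
                                w_min * a + w_max * b)   # weighted average
    return np.where(match < threshold, select, weight)

rng = np.random.default_rng(4)
HA = rng.standard_normal((16, 16))
HB = rng.standard_normal((16, 16))
HF = fuse_highfreq(HA, HB)
```

Where the two detail images disagree (low match), the rule keeps the locally stronger edge; where they agree, averaging suppresses noise without blurring shared structure.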
According to formula (1), measure the fused sparse matrices with the measurement matrix to obtain the measurement values, and then recover the fused component with the reconstruction algorithm SAMP (sparsity adaptive matching pursuit).
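A simplified sketch of SAMP-style reconstruction follows. It mirrors the published outline of sparsity adaptive matching pursuit (the support grows stage by stage until the residual stops shrinking), not the authors' MATLAB code; the dimensions and random operator are illustrative:

```python
import numpy as np

def samp(A, y, step=1, tol=1e-8, max_iter=100):
    """Simplified sparsity adaptive matching pursuit (SAMP).

    Recovers a sparse x with y = A @ x without knowing the sparsity in
    advance: the working support size grows by `step` whenever the
    residual stops shrinking (a "stage switch")."""
    m, n = A.shape
    size = step
    support = np.array([], dtype=int)
    r = y.astype(float).copy()
    best_norm = np.inf
    for _ in range(max_iter):
        corr = np.abs(A.T @ r)
        # Preliminary: merge current support with the best new atoms
        candidates = np.union1d(support, np.argsort(corr)[-size:])
        x_c, *_ = np.linalg.lstsq(A[:, candidates], y, rcond=None)
        # Final: keep the `size` largest entries and refit
        keep = candidates[np.argsort(np.abs(x_c))[-size:]]
        x_f, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
        r_new = y - A[:, keep] @ x_f
        if np.linalg.norm(r_new) < tol:        # converged
            support = keep
            break
        if np.linalg.norm(r_new) >= best_norm: # stage switch: enlarge support
            size += step
        else:                                  # accept the improved support
            support, r = keep, r_new
            best_norm = np.linalg.norm(r_new)
    x = np.zeros(n)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x[support] = coef
    return x

rng = np.random.default_rng(5)
m, n, k = 50, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = samp(A, y)
```

The stage-adaptive support size is what lets SAMP work when the sparsity of the wavelet coefficients is unknown, which is the situation in this fusion pipeline.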
Apply an $\alpha$-order, $N$-level symmetric fractional spline wavelet reconstruction to the recovered component to get the new first principal component.
Replace the first principal component obtained in the earlier step with the new one, and perform the inverse PCA transform on it together with the other principal components of the MS image to obtain the fused image.
4. Experiment Result and Analysis
We simulate the proposed algorithm in MATLAB 7.8. Two groups of experimental data are adopted: one group is a Landsat-TM multispectral image (MS image, 30 m resolution) and a SPOT panchromatic image (PAN image, 10 m resolution); the other group is an IKONOS multispectral image (MS image, 4 m resolution) and an IKONOS panchromatic image (PAN image, 1 m resolution). Figure 3 illustrates the two groups of source images.
4.1. Analysis of the Symmetric Fractional Spline Wavelet Order
When performing the $\alpha$-order, $N$-level symmetric fractional spline wavelet transform, the number of decomposition levels $N$ is set to 3. When an image undergoes the symmetric fractional spline wavelet transform, the effectiveness of the fusion varies with $\alpha$. Based on [25], we compute the entropy (EN), average gradient (AG), correlation coefficient (CC), and degree of distortion (DE); they are illustrated in Figure 4.
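The four indexes can be computed as in the sketch below. These are common definitions from the fusion literature; the exact formulas in [25] may differ in normalization, and the synthetic images are placeholders:

```python
import numpy as np

def entropy(img, bins=256):
    """Information entropy (EN) of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def average_gradient(img):
    """Average gradient (AG): mean magnitude of local gray-level changes."""
    gx = np.diff(img.astype(float), axis=1)[:-1, :]
    gy = np.diff(img.astype(float), axis=0)[:, :-1]
    return np.sqrt((gx ** 2 + gy ** 2) / 2).mean()

def correlation_coefficient(f, r):
    """Correlation coefficient (CC) between fused and reference images."""
    f, r = f.astype(float), r.astype(float)
    fm, rm = f - f.mean(), r - r.mean()
    return (fm * rm).sum() / np.sqrt((fm ** 2).sum() * (rm ** 2).sum())

def distortion(f, r):
    """Degree of distortion (DE): mean absolute gray-level difference."""
    return np.abs(f.astype(float) - r.astype(float)).mean()

rng = np.random.default_rng(6)
ref = rng.integers(0, 256, (32, 32))                       # stand-in MS band
fused = np.clip(ref + rng.integers(-5, 6, (32, 32)), 0, 255)  # stand-in fused band
```

A good fusion result has large EN, AG, and CC together with small DE, which is the trade-off examined in Figure 4.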
In Figure 4, the abscissa of each part is the wavelet order $\alpha$; the ordinates represent, respectively, the entropy, the definition (average gradient), the correlation coefficient, and the degree of distortion of the fusion image. The figures show that, as the order of the symmetric fractional spline wavelet transform increases, the information entropy and correlation coefficient (columns 1 and 3) tend to decline after a short period of slight increase, the definition (column 2) increases after a short period of slight decline, and the degree of distortion (column 4) keeps increasing. Image fusion aims at comparatively large information entropy, average gradient, and correlation coefficient, together with minimal distortion. After multiple experiments, we set the order to −0.25 for the Landsat-TM and SPOT images in group one and chose the order for the IKONOS images in group two accordingly. With a proper value of $\alpha$, we obtain the best fusion effectiveness and an optimal balance among the four quality-evaluation indexes.
4.2. Comparison and Analysis with Traditional Fusion Algorithm
Four different methods are adopted to fuse the satellite remote sensing images: the traditional PCA transform [3], the wavelet transform (DWT) [26], the fusion method based on PCA and the fractional spline wavelet (FWT-PCA) [27] (the wavelet in that work is also a fractional spline wavelet), and the fusion method proposed in this paper (CS-FWT-PCA). The fused images are illustrated in Figures 5 and 6. A fixed compressed sensing sampling rate is used throughout.
Comparing the corresponding images in Figures 3, 5, and 6 by subjective visual effect, we find that the spatial resolution of these fusion images is quite close, and all are higher in resolution than the multispectral images before merging (Figures 3(a) and 3(c)). From the view of the spectral signature, the fusion method based on traditional PCA shows spectral distortion (Figures 5(a) and 6(a)). Although the fusion method combining the fractional spline wavelet transform and the PCA transform (Figures 5(c) and 6(c)) in [27] achieves fair fusion effectiveness, the algorithm proposed in this paper attains higher spatial resolution and richer spectral information than the method in [27], with distinct and visible image contours.
Next, objective evaluation is used to analyse the information entropy, average gradient, correlation coefficient, and degree of distortion of the fused images for each fusion method. Tables 1 and 2 evaluate the fusion performance on the Landsat-TM and SPOT data. Tables 3 and 4 evaluate the fusion performance on the IKONOS data.
From the data in Tables 1 and 2, it can be seen that the evaluation indexes of the FWT-PCA-based fusion image are superior to those of the DWT-based and PCA-based fusion images. The average gradient of the FWT-PCA-based image increased by 17.99% over the DWT-based image, and the distortion decreased by 31.66%. The CS-FWT-PCA-based fusion image further improves the entropy (by 0.0845), average gradient (by 0.7618), and correlation coefficient (by 0.0674), and also significantly reduces the degree of distortion (by 1.3037).
In Tables 3 and 4, although the source images are different, the simulation results are similar to those based on the Landsat-TM and SPOT images. Compared with the PCA algorithm, the DWT algorithm increases the average gradient and correlation coefficient and decreases the entropy and the degree of distortion, though the differences between the two algorithms are not obvious. Each index of the FWT-PCA-based fused image is better than that of the PCA-based and DWT-based images, while the four indicators of the CS-FWT-PCA-based image are further optimized: the mean entropy of the RGB channels is 7.8488, higher than the 7.7974 of the FWT-PCA-based image; the mean average gradient and mean correlation coefficient improve to 29.6318 and 0.8879, respectively; and the mean distortion is reduced to the minimum, 18.2858.
These parameters show that after the traditional PCA transform the information entropy and the correlation coefficient are the lowest, the average gradient and degree of distortion are comparatively large, and the fusion effectiveness is worse than that of the other methods. The reason is that in the PCA transform the first principal component represents the part of the image that varies most and carries most of the spatial detail, so it correlates strongly with the panchromatic image; directly replacing it with the panchromatic image discards part of the spectral information, whereas the methods that fuse at the coefficient level retain more spectral information and achieve better comprehensive effectiveness.
With great approximation capability, the symmetric fractional spline wavelet transform performs better when capturing the detailed information of the image. By combining the symmetric fractional spline wavelet transform with the PCA transform, FWT-PCA improves the textural features of the image through the PCA transform, thereby enhancing the expression of spatial detail, while the symmetric fractional spline wavelet transform preserves the richness of the spectral information. It thus improves the definition of the fusion image while significantly reducing the distortion, and the information entropy and correlation coefficient improve markedly. The CS-FWT-PCA algorithm proposed in this paper further reduces the sampling time through compressive sensing; the symmetric fractional spline wavelet transform provides the sparsification, and, combined with the PCA transform, it achieves the highest definition of the fusion image. The CS-FWT-PCA-based fusion image is closest to the MS image in color, with the minimum distortion and the maximum comprehensive index. The method preserves the high spatial resolution of the source image and the richness of the spectral information, improves the fusion quality, and obtains the optimal fusion effectiveness with fewer sampling points.
5. Conclusion
In this paper, we introduced compressed sensing and its application and then described the image fusion algorithm based on CS-FWT-PCA. In the simulations that followed, two groups of experimental data were fused separately using the proposed algorithm, the classical PCA fusion method, the wavelet transform, and the FWT-PCA fusion rules. We conclude that the FWT-PCA and CS-FWT-PCA algorithms are clearly superior to the others, and the CS-FWT-PCA algorithm performs best. However, the compressed-sensing-based algorithm is time-consuming in simulation. Our future work will focus on improving the image fusion efficiency of the proposed algorithm and reducing the simulation time.
References
[1] J. Shlens, A Tutorial on Principal Component Analysis, Systems Neurobiology Laboratory, University of California at San Diego, 2005.
[2] C. Proppe, "Multiresolution analysis for stochastic finite element problems with wavelet-based Karhunen-Loève expansion," Mathematical Problems in Engineering, vol. 2012, Article ID 215109, 15 pages, 2012.
[3] P. S. Chavez Jr., S. C. Sides, and J. A. Anderson, "Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic," Photogrammetric Engineering & Remote Sensing, vol. 57, no. 3, pp. 295–303, 1991.
[4] B. A. Olshausen and D. J. Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature, vol. 381, no. 6583, pp. 607–609, 1996.
[5] E. J. Candès and D. L. Donoho, "New tight frames of curvelets and optimal representations of objects with piecewise ${C}^{2}$ singularities," Communications on Pure and Applied Mathematics, vol. 57, no. 2, pp. 219–266, 2004.
[6] E. le Pennec and S. Mallat, "Sparse geometric image representations with bandelets," IEEE Transactions on Image Processing, vol. 14, no. 4, pp. 423–438, 2005.
[7] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Review, vol. 43, no. 1, pp. 129–159, 2001.
[8] S. G. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3397–3415, 1993.
[9] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[10] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[11] E. J. Candès and M. B. Wakin, "An introduction to compressive sampling: a sensing/sampling paradigm that goes against the common knowledge in data acquisition," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.
[12] R. G. Baraniuk, "Compressive sensing," IEEE Signal Processing Magazine, vol. 24, no. 4, pp. 118–124, 2007.
[13] W. Fang, "Image processing and reconstruction based on compressed sensing," Journal of Optoelectronics Laser, vol. 23, no. 1, pp. 196–202, 2012.
[14] Z. Zhu, K. Wahid, P. Babyn, D. Cooper, I. Pratt, and Y. Carter, "Improved compressed sensing-based algorithm for sparse-view CT image reconstruction," Computational and Mathematical Methods in Medicine, vol. 2013, Article ID 185750, 15 pages, 2013.
[15] L. Jing, H. ChongZhao, Y. XiangHua, and L. Feng, "Splitting matching pursuit method for reconstructing sparse signal in compressed sensing," Journal of Applied Mathematics, vol. 2013, Article ID 804640, 8 pages, 2013.
[16] E. J. Candès and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
[17] W. Xu, J. Lin, K. Niu, and Z. He, "Performance analysis of support recovery in compressed sensing," AEU—International Journal of Electronics and Communications, vol. 66, no. 4, pp. 294–296, 2012.
[18] Z. Xiongwei, H. Jianjun, and Z. Tao, "Compressive sensing: innovative theory in information processing field," Journal of Military Communications Technology, vol. 32, no. 4, pp. 83–87, 2011.
[19] M. Unser and T. Blu, "Fractional splines and wavelets," SIAM Review, vol. 42, no. 1, pp. 43–67, 2000.
[20] T. Blu and M. Unser, "Fractional spline wavelet transform: definition and implementation," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, pp. 512–515, June 2000.
[21] M. Unser and T. Blu, "Construction of fractional spline wavelet bases," Wavelet Applications in Signal and Image Processing VII, vol. 3813, pp. 422–431, 1999.
[22] L. Yachun and W. Jingang, "Analysis on image fusion rules based on wavelet transform," Computer Engineering and Applications, vol. 46, no. 8, pp. 180–182, 2010.
[23] C. Heng, Research on Pixel-Level Image Fusion and Its Key Technologies, University of Electronic Science and Technology of China, Chengdu, China, 2008.
[24] Y. S. Juang, L. T. Ko, J. E. Chen, Y. S. Shieh, T. Y. Sung, and H. Chin Hsin, "Histogram modification and wavelet transform for high performance watermarking," Mathematical Problems in Engineering, vol. 2012, Article ID 164869, 14 pages, 2012.
[25] L. Guo, H. Li, and Y. Bao, Image Fusion, Publishing House of Electronics Industry Press, Beijing, China, 2008.
[26] J. Zhou, D. L. Civco, and J. A. Silander, "A wavelet transform method to merge Landsat TM and SPOT panchromatic data," International Journal of Remote Sensing, vol. 19, no. 4, pp. 743–757, 1998.
[27] W. Yang and Y. Gong, "Multispectral and panchromatic images fusion based on PCA and fractional spline wavelet," International Journal of Remote Sensing, vol. 33, no. 22, pp. 7060–7074, 2012.
Copyright
Copyright © 2013 Wenkao Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.