Special Issue: Multimedia Data Fusion
A Novel Algorithm for Satellite Images Fusion Based on Compressed Sensing and PCA
This paper studies the fusion of high-resolution panchromatic (PAN) images with low-resolution multispectral (MS) images. Building on the classic remote sensing image fusion algorithms, the PCA (principal component analysis) transform and the discrete wavelet transform, we carry out in-depth research. Compressed sensing (CS) abandons full sampling and shifts the sampling of the signal to the sampling of information, which greatly reduces the potential cost of traditional signal acquisition and processing. We combine compressed sensing with satellite remote sensing image fusion and propose an innovative fusion algorithm (CS-FWT-PCA), in which the symmetric fractional B-spline wavelet acts as the sparse basis. The algorithm uses the Hadamard matrix as the measurement matrix, SAMP as the reconstruction algorithm, and an improved fusion rule based on local variance. Simulation results show that the CS-FWT-PCA fusion algorithm achieves a better fusion effect than the traditional fusion methods.
Numerous interference factors are mixed into the process of image acquisition and transmission, so the images we obtain are largely random. PCA, also known as the Karhunen-Loève transform, is designed to transform such random images: it performs a multidimensional orthogonal linear transformation based on the statistical characteristics of the image. As a dimension-reduction technique, it transforms many components into a few comprehensive components that retain as much of the original variable information as possible. PCA concentrates variance, compresses data size, and represents the remote sensing information of the multiband data structure more precisely, yielding the statistically best approximation to the original image. PCA is widely applied, mainly to the fusion of multiband images. Chavez was the first to apply PCA to multisensor image fusion; he fused Landsat-TM multispectral and SPOT panchromatic images, achieving impressive results.
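As an illustration of the PCA transform described above, the following Python sketch (not the paper's MATLAB code; the function names are our own) computes the principal components of a multispectral cube from the band covariance matrix and inverts the transform exactly, since the eigenvector matrix is orthogonal:

```python
import numpy as np

def pca_transform(ms):
    """PCA on a multispectral cube of shape (H, W, B).
    Returns the principal-component 'bands' plus the data needed to invert."""
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(float)      # pixels x bands
    mean = x.mean(axis=0)
    cov = np.cov(x - mean, rowvar=False)     # band covariance matrix
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    vecs = vecs[:, np.argsort(vals)[::-1]]   # sort eigenvectors by descending variance
    pcs = (x - mean) @ vecs                  # project pixels onto the eigenvectors
    return pcs.reshape(h, w, b), vecs, mean

def pca_inverse(pcs, vecs, mean):
    """Invert the PCA transform (vecs is orthogonal, so its inverse is its transpose)."""
    h, w, b = pcs.shape
    x = pcs.reshape(-1, b) @ vecs.T + mean
    return x.reshape(h, w, b)

rng = np.random.default_rng(0)
ms = rng.random((8, 8, 3))                   # toy 3-band image
pcs, vecs, mean = pca_transform(ms)
assert np.allclose(pca_inverse(pcs, vecs, mean), ms)   # exact round trip
```

The first principal component carries the largest share of the variance, which is why the fusion algorithm later injects the spatial detail of the PAN image into that component.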
Olshausen and Field published a paper in Nature in 1996 indicating that the mammalian visual cortex represents image features sparsely. After that, research on image sparse modeling attracted broad attention, and excellent tools (Curvelet, Bandelet, etc.) and methods (basis pursuit (BP), matching pursuit (MP), etc.) for image sparse representation were proposed. The development of compressed sensing theory [9–12] is based on sparse representation. CS samples and compresses at the same time; its basic idea is to collect only the information directly related to the useful object, the obtained values being projections from a high-dimensional space to a low-dimensional one. The main research topics of CS include the projection measurement method, the conditions for reconstructability, and image reconstruction methods [13–15].
We combine compressed sensing theory with PCA and propose a fusion method based on the CS-FWT-PCA algorithm. We apply the proposed algorithm, the traditional PCA transform, and some improved PCA transforms, respectively, to image fusion. Simulation results show that the fused image based on CS-FWT-PCA has good spatial resolution and also efficiently preserves the spectral features of the original multispectral image.
2. Compression Sensing Theory of Satellite Remote Sensing Image Fusion
Candès and Tao pioneered the concept of compressed sensing in 2006. Based on signal harmonic analysis, matrix analysis, sparse representation, statistics and probability theory, time-frequency analysis, functional analysis, and optimal reconstruction, CS has developed rapidly. It aims to obtain information from the signal directly, decoupled from physical measurement quantities such as signal frequency. As long as a signal is compressible in some sparse domain, its transform coefficients can be linearly projected onto a low-dimensional observation vector by a measurement matrix that is incoherent with the transformation matrix. The original signal can then be precisely reconstructed from far fewer samples using sparse optimization theory, since the samples contain enough information. This makes CS well suited to recovering a high-resolution satellite remote sensing signal from low-resolution observations. CS theory mainly includes three parts: sparse representation, the design of the measurement matrix, and the reconstruction algorithm. For signals that admit a sparse representation, its advantage is that it merges traditional data acquisition with data compression, compressing the data while the signal is being acquired. This greatly reduces the potential cost of traditional signal acquisition and processing.
2.1. Mathematical Model of Compressed Sensing Theory
The traditional linear measurement model, written in matrix form, is

y = Φx, (1)

where x is the N-dimensional signal, Φ is the M × N measurement matrix, and y is the M-dimensional measurement vector. From signal theory, the N-dimensional signal x has a linear representation over an orthogonal basis Ψ:

x = Ψα, (2)

with expansion coefficient vector α = Ψ^T x.

Substituting (2) into (1) and denoting the CS information operator Θ = ΦΨ, we get

y = ΦΨα = Θα. (3)

Under compression, the number of measurements M is far less than the signal length N (M ≪ N). From (1) we see that recovering x from y is an ill-conditioned problem, because the number of unknowns is greater than the number of equations, which means there exist infinitely many solutions. But if x is a compressible sparse signal, then α in formula (2) is also sparse; although recovering α from y is still ill-conditioned, the number of unknowns is greatly reduced, making signal reconstruction possible. Signal reconstruction in compressed sensing theory looks for the optimal solution under a constraint: it extracts the signal by solving an optimization problem under the l0 norm, formulated as

min ||α||_0 subject to y = Θα. (4)

From formula (4) the sparse coefficients α can be estimated. The convex-optimization compressed sensing recovery framework under the l1 norm is an important innovation proposed by Donoho and Candès. Its main idea is to replace the nonconvex optimization objective in formula (4) by the l1 norm:

min ||α||_1 subject to y = Θα. (5)
Thus, the nonconvex optimization problem in formula (4) is turned into a convex optimization problem, whose result can be obtained by solving a linear programming problem.
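The measure-then-reconstruct idea above can be illustrated in Python. The paper uses a Hadamard measurement matrix and SAMP reconstruction; as a simplified stand-in, the sketch below uses a random Gaussian measurement matrix and orthogonal matching pursuit (OMP), a greedy relative of SAMP, to recover a synthetic sparse signal from far fewer measurements than its length:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None) # least squares on current support
        residual = y - sub @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 32, 4                                    # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)         # random Gaussian measurement matrix
y = Phi @ x_true                                       # m << n measurements
x_hat = omp(Phi, y, k)
assert np.allclose(x_hat, x_true, atol=1e-6)           # exact recovery of the sparse signal
```

With m = 32 measurements of a 64-sample, 4-sparse signal, the greedy solver recovers the signal exactly, which is the practical content of formulas (3) and (4).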
In conclusion, the implementation of compressed sensing theory involves three basic elements: the sparse expression of signals, incoherent observation by the measurement matrix, and nonlinear optimization reconstruction of signals. Signal sparsity is the necessary condition of CS theory, the measurement matrix is the key, and nonlinear optimization is the approach CS theory uses to reconstruct the signal. The framework of compressed sensing theory is shown in Figure 1.
The differences between CS theory and the traditional sampling theorem are as follows.
Firstly, traditional sampling theorem takes the infinite-length continuous signal into consideration, but CS theory concerns the vector of finite dimension.
Secondly, traditional sampling theorem obtains data by uniform sampling; by contrast, CS theory gets observed data by utilizing the inner product of signal and measurement function.
Lastly, signal reconstruction differs. Traditional sampling recovers the signal by linear interpolation with the sinc function, whereas CS theory solves a highly nonlinear optimization problem on the observed data to recover the signal.
3. CS-FWT-PCA-Based Satellite Remote Sensing Image Fusion
We apply compressed sensing theory combined with PCA to satellite remote sensing image fusion and choose the fractional B-spline wavelet as the sparse basis. The fusion rules are improved to increase spatial resolution, enhance spectral information, and accelerate fusion on large data sets. The fractional B-spline wavelet transform is similar to the traditional wavelet transform: its coefficients consist of a small number of large-magnitude coefficients and a large number of small-magnitude coefficients, which adequately reflect the local variation of the original image and thus provide favorable conditions for image fusion.
3.1. Fractional B-Spline Wavelet Transform
In 1999, Unser and Blu first generalized the spline function to fractional orders on the basis of polynomial splines, constructing fractional B-spline functions from fractional differences and giving a concrete expression [19–21]. They proved that these functions perform well enough to serve as wavelet basis functions for the wavelet transform. Since the order of this wavelet transform can be a fraction, it is called the fractional B-spline wavelet transform.
The symmetric fractional B-spline function of order α (α a real number, α > −1) can be defined in the Fourier domain as

β̂*^α(ω) = |sin(ω/2) / (ω/2)|^(α+1).

Unser and Blu proved in [19–21] that the order-α symmetric fractional B-spline function has the multiresolution property, so it can be used to construct a wavelet basis function and satisfies a two-scale (refinement) equation. The symmetric fractional B-spline functions form a Riesz basis; through orthogonalization and normalization they can be turned into an orthonormal basis and can therefore be used as a sparse basis for the sparse transformation of signals.
The fractional B-spline function belongs to L2 provided that α > −1/2. The experiment selects α > −1/2 because the wavelet transform is carried out in the L2 space. With the symmetric fractional B-spline wavelets of this space, an orthogonal filter bank can be constructed to obtain the corresponding symmetric fractional B-spline wavelet transform.
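For intuition, the Fourier-domain response of the symmetric fractional B-spline, |sin(ω/2)/(ω/2)|^(α+1), can be evaluated numerically. The short Python sketch below (the helper name is our own) confirms its low-pass character and that a larger order α gives faster high-frequency decay:

```python
import numpy as np

def frac_bspline_ft(omega, alpha):
    """Fourier transform of the symmetric fractional B-spline of order alpha:
    |sin(omega/2) / (omega/2)|^(alpha + 1)."""
    # np.sinc(x) = sin(pi*x)/(pi*x), so sin(w/2)/(w/2) = np.sinc(w / (2*pi))
    return np.abs(np.sinc(np.asarray(omega, dtype=float) / (2 * np.pi))) ** (alpha + 1)

w = np.linspace(0.0, 6 * np.pi, 400)
resp = frac_bspline_ft(w, alpha=-0.25)
assert np.isclose(resp[0], 1.0)       # unit gain at DC: a low-pass scaling function
assert resp[-1] < 0.2                 # attenuated at high frequency
# larger order -> faster decay at any fixed frequency
assert frac_bspline_ft(np.pi, 1.0) < frac_bspline_ft(np.pi, -0.25)
```

This is why the decomposition concentrates the image energy in a few large coefficients, which is exactly the sparsity that compressed sensing requires.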
3.2. Determining the Fusion Rules
This paper presents an improved fusion rule. The images are registered before the PCA transform is applied to the MS image. Then the N-layer symmetric fractional B-spline wavelet is used to sparsely transform the matched PAN image and the first principal component of the PCA-transformed MS image. After the sparse transform, each layer decomposes into one low-frequency sparse matrix and a series of high-frequency sparse matrices. We fuse the coefficients separately because the high- and low-frequency sparse coefficients have different characteristics. The low-frequency sparse coefficients represent the approximate image and vary little, so ordinary weighted-average fusion is used for the low-frequency subimage. The high-frequency coefficients, by contrast, differ markedly and carry significant details of the original image, such as bright lines and edges; to obtain a better fusion effect, fusion rules based on regional feature selection are adopted for them. Selecting a different fusion strategy adaptively for the different parts of the image (the high-frequency and low-frequency subimages) improves the quality of image fusion more effectively.
3.3. CS-FWT-PCA-Based Satellite Remote Sensing Image Fusion Algorithm
The flow chart of the satellite remote sensing image fusion algorithm based on compressed sensing, the PCA transform, and the fractional B-spline wavelet transform (CS-FWT-PCA) is shown in Figure 2.
The concrete steps are as follows.
Register the PAN image with the MS image using a SURF-based registration method to get the image PAN1.
Apply PCA to the MS image to get the first principal component and the other principal components; then apply an α-order, N-layer symmetric fractional B-spline wavelet decomposition to the first principal component and sparsify it to obtain the high-frequency sparse matrices and the low-frequency sparse matrix of each layer.
Perform histogram matching of PAN1 with the first principal component of the MS image obtained in the previous step to get the enhanced image PAN2; then apply the same symmetric fractional B-spline N-layer wavelet decomposition and sparsification to obtain its high-frequency sparse matrices and low-frequency sparse matrix in each layer.
Fuse the low-frequency sparse matrices L_A and L_B of corresponding layers by the weighted-average method to obtain the low-frequency coefficients of the fused image.
The correlation coefficient of the two sets of low-frequency subimages is defined as

r = Σ (L_A − μ_A)(L_B − μ_B) / sqrt( Σ (L_A − μ_A)² · Σ (L_B − μ_B)² ),

wherein L_A and L_B, respectively, represent the wavelet low-frequency coefficients of the first principal component and of PAN2, μ_A and μ_B, respectively, represent their averages, and the size of the images is M × N. Fusion weights w₁ and w₂ (with w₁ + w₂ = 1) are then derived from r, and the fused low-frequency coefficient is calculated as

L_F = w₁ L_A + w₂ L_B.
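A minimal Python sketch of the low-frequency rule follows. The correlation coefficient uses the standard definition; since the paper's exact weight formula is not fully specified in the text above, the weights here are an illustrative energy-proportional choice that still sums to 1:

```python
import numpy as np

def corr_coeff(a, b):
    """Correlation coefficient r of two low-frequency coefficient matrices."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def fuse_lowfreq(la, lb):
    """Weighted-average fusion of low-frequency matrices. The weights are an
    illustrative choice (proportional to each subband's energy, summing to 1),
    standing in for the paper's exact weight formula."""
    ea, eb = float((la * la).mean()), float((lb * lb).mean())
    w1 = ea / (ea + eb)
    return w1 * la + (1.0 - w1) * lb

rng = np.random.default_rng(2)
la = rng.standard_normal((16, 16))
lb = la + 0.1 * rng.standard_normal((16, 16))   # strongly correlated subband
assert 0.9 < corr_coeff(la, lb) <= 1.0
assert np.allclose(fuse_lowfreq(la, la), la)    # identical inputs pass through unchanged
```

Because the low-frequency coefficients of the two images are highly correlated, any such convex weighting preserves the approximate image while averaging out noise.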
Fuse the two high-frequency sparse matrices H_A and H_B of each layer via the regional feature selection method to obtain the fused high-frequency coefficients H_F.
Determine a local region Q of size m × n centered at the point (x, y), which lies at a pixel of the wavelet high-frequency coefficient matrix. C_A(x, y) and C_B(x, y) are the wavelet coefficients of H_A and H_B, respectively, at the point (x, y), while C̄(x, y) is the mean of the coefficients in the region Q.
First, the local deviation is defined as

D(x, y) = Σ_{(p,q)∈Q} w(p, q) [C(p, q) − C̄(x, y)]²,

wherein w(p, q) is a weighting factor satisfying Σ_{(p,q)∈Q} w(p, q) = 1. The nearer the point (p, q) is to the center (x, y), the greater the weighting factor; this rule is used to obtain w.
Second, the matching matrix is expressed as

M(x, y) = 2 Σ_{(p,q)∈Q} w(p, q) (C_A(p, q) − C̄_A)(C_B(p, q) − C̄_B) / (D_A(x, y) + D_B(x, y)).

The closer M(x, y) is to 1, the higher the correlation between the two high-frequency subimages at that point.
Set a threshold T for the matching degree.
If M(x, y) < T, select the coefficient with the larger local deviation:

C_F(x, y) = C_A(x, y) if D_A(x, y) ≥ D_B(x, y), and C_B(x, y) otherwise.

Otherwise, use weighted averaging, giving the larger weight to the coefficient with the larger local deviation:

C_F(x, y) = w_max C_A(x, y) + w_min C_B(x, y) when D_A(x, y) ≥ D_B(x, y) (with the roles of C_A and C_B exchanged otherwise), where

w_min = 1/2 − (1/2)(1 − M(x, y)) / (1 − T), w_max = 1 − w_min.
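The high-frequency rule can be sketched in Python as follows. The window size, the match-degree formula, and the blending weights here are illustrative stand-ins for the paper's exact (partly lost) definitions:

```python
import numpy as np

def local_var(c, size=3):
    """Local variance of wavelet coefficients in a size x size window."""
    pad = size // 2
    p = np.pad(c.astype(float), pad, mode="reflect")
    out = np.empty(c.shape)
    for i in range(c.shape[0]):
        for j in range(c.shape[1]):
            out[i, j] = p[i:i + size, j:j + size].var()
    return out

def fuse_highfreq(ha, hb, T=0.7, size=3):
    """Region-based high-frequency fusion: where the local match degree falls
    below threshold T, select the coefficient with the larger local variance;
    otherwise blend the two, driven by the match degree (illustrative weights)."""
    da, db = local_var(ha, size), local_var(hb, size)
    match = 2.0 * np.sqrt(da * db) / (da + db + 1e-12)   # stand-in match degree in [0, 1]
    pick_a = da >= db
    w_min = np.clip(0.5 - 0.5 * (1.0 - match) / (1.0 - T), 0.0, 0.5)
    w_max = 1.0 - w_min
    blended = np.where(pick_a, w_max * ha + w_min * hb, w_max * hb + w_min * ha)
    chosen = np.where(pick_a, ha, hb)
    return np.where(match < T, chosen, blended)

rng = np.random.default_rng(3)
ha = rng.standard_normal((12, 12))
hb = rng.standard_normal((12, 12))
hf = fuse_highfreq(ha, hb)
assert hf.shape == ha.shape
assert np.allclose(fuse_highfreq(ha, ha), ha)   # identical inputs are preserved
```

Selecting by local variance keeps the sharpest edge response from either image wherever the two subbands disagree, which is the intent of the regional feature selection rule.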
According to the measurement formula, apply the measurement matrix to the fused sparse matrices L_F and H_F to get the measurement values, and then obtain the fused component according to the reconstruction algorithm SAMP.
Perform an α-order, N-layer symmetric fractional B-spline wavelet reconstruction on the fused component to get the new first principal component.
The first principal component obtained in the PCA step is replaced by this new component. Perform the inverse PCA transform on it together with the other principal components of the MS image to obtain the fused image.
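The overall flow of the steps above can be sketched in Python. Registration, histogram matching (reduced to mean/std matching), the compressed sensing measurement, and SAMP are simplified away, and a one-level Haar transform stands in for the fractional B-spline wavelet, so this shows only the structure of the pipeline, not the paper's exact method:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform (stand-in for the fractional B-spline wavelet)."""
    a = (x[0::2] + x[1::2]) / 2          # row averages
    d = (x[0::2] - x[1::2]) / 2          # row details
    ll, hl = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    lh, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, (hl, lh, hh)

def ihaar2(ll, bands):
    """Exact inverse of haar2."""
    hl, lh, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2)); d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + hl, ll - hl
    d[:, 0::2], d[:, 1::2] = lh + hh, lh - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(pan, ms):
    """Structural sketch: PCA -> wavelet fusion of PC1 with PAN -> inverse PCA."""
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(float)
    mean = x.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(x - mean, rowvar=False))
    vecs = vecs[:, ::-1]                              # descending variance
    pcs = (x - mean) @ vecs
    pc1 = pcs[:, 0].reshape(h, w)
    # crude stand-in for histogram matching of PAN to PC1
    pan2 = (pan - pan.mean()) / (pan.std() + 1e-12) * pc1.std() + pc1.mean()
    ll_a, hi_a = haar2(pc1); ll_b, hi_b = haar2(pan2)
    ll_f = (ll_a + ll_b) / 2                          # average low frequencies
    hi_f = tuple(np.where(np.abs(c1) >= np.abs(c2), c1, c2)   # keep stronger details
                 for c1, c2 in zip(hi_a, hi_b))
    pcs[:, 0] = ihaar2(ll_f, hi_f).ravel()            # substitute fused PC1
    return (pcs @ vecs.T + mean).reshape(h, w, b)     # inverse PCA

rng = np.random.default_rng(0)
ms = rng.random((8, 8, 3)); pan = rng.random((8, 8))
fused = fuse(pan, ms)
assert fused.shape == ms.shape
```

Swapping `haar2`/`ihaar2` for a fractional B-spline filter bank and inserting the measurement/SAMP stage between fusion and reconstruction recovers the structure of the CS-FWT-PCA algorithm.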
4. Experiment Result and Analysis
We simulate the proposed algorithm in MATLAB 7.8. Two groups of experimental data are adopted: one is a Landsat-TM multispectral image (MS image, 30 m resolution) and a SPOT panchromatic image (PAN image, 10 m resolution); the other is an IKONOS multispectral image (MS image, 4 m resolution) and an IKONOS panchromatic image (PAN image, 1 m resolution). Figure 3 illustrates the two groups of source images.
4.1. The Analysis of the Symmetric Fractional B-Spline Wavelet Order
When applying the α-order, N-layer symmetric fractional B-spline wavelet transform, N is set to 3. When an image undergoes the symmetric fractional B-spline wavelet transform, the effectiveness of the fusion changes with α. By varying α, we obtain the entropy (EN), average gradient (AG), correlation coefficient (CC), and degree of distortion (DE); they are illustrated in Figure 4.
In Figure 4, the abscissa of each part is the wavelet order α; the ordinates represent, respectively, the entropy, the definition (average gradient), the correlation coefficient, and the degree of distortion of the fused image. The figures show that as the order of the symmetric fractional B-spline wavelet transform increases, the information entropy and correlation coefficient (columns 1 and 3) decline after a short initial rise, the definition (column 2) increases after a short initial decline, and the degree of distortion (column 4) keeps increasing. Image fusion aims at comparatively large information entropy, average gradient, and correlation coefficient together with minimal distortion. After multiple experiments, we set the order to −0.25 for the Landsat-TM and SPOT images in group one and chose a correspondingly tuned order for the IKONOS images in group two. With a proper value of α, we obtain the best fusion effectiveness and an optimal balance among the four quality-evaluation indexes.
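The four evaluation indexes can be computed with short Python helpers. These are our own formulations of the standard definitions; in particular, the paper's exact distortion formula is not given, so mean absolute difference is used here as a common choice:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram: information content."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def avg_gradient(img):
    """Average gradient: mean magnitude of local differences (sharpness/definition)."""
    g = img.astype(float)
    gx = np.diff(g, axis=1)[:-1, :]
    gy = np.diff(g, axis=0)[:, :-1]
    return float(np.sqrt((gx**2 + gy**2) / 2).mean())

def corr_coeff(a, b):
    """Correlation coefficient between the fused image and a reference band."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def distortion(fused, ref):
    """Degree of distortion as mean absolute grey-level difference (one common choice)."""
    return float(np.abs(fused.astype(float) - ref.astype(float)).mean())

img = (np.arange(64, dtype=float).reshape(8, 8) * 3.7) % 11   # toy image
assert entropy(img) > 0.0
assert abs(corr_coeff(img, img) - 1.0) < 1e-9
assert distortion(img, img) == 0.0
assert avg_gradient(np.ones((8, 8))) == 0.0                   # flat image has no gradient
```

As in the tables that follow, higher entropy, average gradient, and correlation coefficient and lower distortion indicate a better fusion result.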
4.2. Comparison and Analysis with Traditional Fusion Algorithm
Four different methods are adopted to fuse the satellite remote sensing images: the traditional PCA transform, the wavelet transform (DWT), the PCA and fractional B-spline wavelet based fusion method (FWT-PCA) (the wavelet in that reference is also the fractional B-spline wavelet), and the fusion method proposed in this paper (CS-FWT-PCA). The fused images are illustrated in Figures 5 and 6. A fixed compressed sensing sampling rate is used.
Comparing the relevant images in Figures 3, 5, and 6 by subjective visual effect, we find that the spatial resolutions of these fused images are quite close to one another and higher than the resolution of the multispectral images before merging (Figures 3(a) and 3(c)). From the view of the spectral signature, the fusion method based on traditional PCA shows spectral distortion (Figures 5(a) and 6(a)). Though the fusion method combining the fractional B-spline wavelet transform and the PCA transform (Figures 5(c) and 6(c)) achieves fair fusion effectiveness, the algorithm proposed in this paper has higher spatial resolution and richer spectral information than that method and produces a distinct, clearly visible image contour.
Next, objective evaluation is used to analyse the information entropy, average gradient, correlation coefficient, and degree of distortion of the fused images for each fusion method. Tables 1 and 2 evaluate the fusion performance on the Landsat-TM and SPOT data. Tables 3 and 4 evaluate the fusion performance on the IKONOS data.
From the data in Tables 1 and 2, it can be seen that the evaluation indexes of the FWT-PCA-based fused image are superior to those of the DWT- and PCA-based fused images: its average gradient increased by 17.99% over the DWT-based image, and its distortion decreased by 31.66%. The CS-FWT-PCA-based fused image further improves the entropy (by 0.0845), the average gradient (by 0.7618), and the correlation coefficient (by 0.0674) and also significantly reduces the degree of distortion (by 1.3037).
In Tables 3 and 4, although the source images are different, the simulation results are similar to those based on the Landsat-TM and SPOT images. Compared with the PCA algorithm, the DWT algorithm increases the average gradient and correlation coefficient and decreases the entropy and the degree of distortion, though the differences between the two algorithms are not pronounced; each index of the FWT-PCA-based fused image is better than that of the PCA- and DWT-based images. The four indicators of the CS-FWT-PCA-based image are greatly optimized: the mean entropy of the RGB channels is 7.8488, higher than the 7.7974 of the FWT-PCA-based image; the mean average gradient and mean correlation coefficient are improved to 29.6318 and 0.8879, respectively; and the mean distortion is reduced to the minimum, 18.2858.
These parameters show that the traditional PCA transform yields the minimum information entropy, a comparatively large average gradient and distortion, and the minimum correlation coefficient, so its fusion effectiveness is worse than that of the other methods. The reason is that in the PCA transform the first principal component captures the greatest variation in the image and contains most of its spatial detail, so it correlates closely with the panchromatic image; a method that fuses the PAN image with this component, rather than simply substituting it, retains more spectral information and achieves better comprehensive effectiveness.
With its great approximation capability, the symmetric fractional B-spline wavelet transform captures the detailed information of an image effectively. By combining it with the PCA transform, FWT-PCA improves the textural features of the image through the PCA transform, thereby enhancing the expression of spatial detail, while preserving the richness of the spectral information through the symmetric fractional B-spline wavelet transform. It therefore improves the definition of the fused image while significantly reducing the distortion, and the information entropy and correlation coefficient improve markedly. The CS-FWT-PCA algorithm proposed in this paper additionally reduces the sampling time substantially through compressive sensing, while the symmetric fractional B-spline wavelet transform provides the sparsification; combined with the PCA transform, it achieves the highest definition of the fused image. The CS-FWT-PCA-based fused image is closest to the MS image in color and has the minimum distortion and the maximum comprehensive index. This method preserves the high spatial resolution of the source image and the richness of its spectral information, improves the fusion quality, and obtains the optimal fusion effectiveness with fewer sampling points.
In this paper, we introduced compressed sensing and its application and then described the image fusion algorithm based on CS-FWT-PCA. In the simulations that followed, two groups of experimental data were fused separately by the proposed algorithm, the classical PCA fusion method, the wavelet transform, and the FWT-PCA fusion rules. We conclude that the FWT-PCA and CS-FWT-PCA algorithms are clearly superior to the others, and the effect of the CS-FWT-PCA algorithm is optimal. The compressed sensing based algorithm, however, requires considerable simulation time. Our future work will focus on improving the image fusion efficiency of the proposed algorithm and reducing the simulation time.
References

J. Shlens, A Tutorial on Principal Component Analysis, Systems Neurobiology Laboratory, University of California at San Diego, 2005.

P. S. Chavez Jr., S. C. Sides, and J. A. Anderson, "Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic," Photogrammetric Engineering & Remote Sensing, vol. 57, no. 3, pp. 295–303, 1991.

W. Fang, "Image processing and reconstruction based on compressed sensing," Journal of Optoelectronics Laser, vol. 23, no. 1, pp. 196–202, 2012.

Z. Zhu, K. Wahid, P. Babyn, D. Cooper, I. Pratt, and Y. Carter, "Improved compressed sensing-based algorithm for sparse-view CT image reconstruction," Computational and Mathematical Methods in Medicine, vol. 2013, Article ID 185750, 15 pages, 2013.

Z. Xiong-wei, H. Jian-jun, and Z. Tao, "Compressive sensing: innovative theory in information processing field," Journal of Military Communications Technology, vol. 32, no. 4, pp. 83–87, 2011.

T. Blu and M. Unser, "Fractional spline wavelet transform: definition and implementation," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, pp. 512–515, June 2000.

L. Ya-chun and W. Jin-gang, "Analysis on image fusion rules based on wavelet transform," Computer Engineering and Applications, vol. 46, no. 8, pp. 180–182, 2010.

C. Heng, Research on Pixel-Level Image Fusion and Its Key Technologies, University of Electronic Science and Technology of China, Chengdu, China, 2008.

Y. S. Juang, L. T. Ko, J. E. Chen, Y. S. Shieh, T. Y. Sung, and H. Chin Hsin, "Histogram modification and wavelet transform for high performance watermarking," Mathematical Problems in Engineering, vol. 2012, Article ID 164869, 14 pages, 2012.

L. Guo, H. Li, and Y. Bao, Image Fusion, Publishing House of Electronics Industry Press, Beijing, China, 2008.