Abstract

This paper presents an image fusion method based on the fuzzy integral that integrates the two single-factor indexes of spectral information and spatial resolution, in order to retain as much spectral information and spatial detail as possible when fusing multispectral and high-resolution remote sensing images. First, wavelet decomposition is applied to the two images separately to obtain their wavelet coefficients; the low-frequency coefficients of the multispectral image are kept, and the high-frequency parts of the two images are fused optimally with a weighting coefficient to generate the new fused image. Finally, the fused image is evaluated with indexes such as the correlation coefficient, image mean value, standard deviation, distortion degree, and information entropy. The test results show that this method integrates multispectral information and high spatial-resolution information well and is an effective fusion method for remote sensing images.

1. Introduction

The starting points for current research on remote sensing image fusion fall into two categories: one is specific to a detailed application purpose, and the other aims at optimizing the integrated quality of the image. The former is based on statistical theory, combined with the understanding of ground-object features in ground sample areas and a priori knowledge, to build a fusion model whose criterion is the specific application purpose, so as to identify certain ground-object features distinctly. The latter mainly extracts salient feature information from the image with mathematical tools such as spectral analysis and the wavelet transform, according to the imaging mechanism and feature analysis of the image, and establishes relationships among image feature information at different scales, so as to form a fusion model aimed at maximizing image information content and comprehensive discriminability. However, whatever the purpose, the main methods currently adopted are largely fusion at the pixel level. Therefore, remote sensing image fusion first requires high-accuracy geometric registration between the pixels of the different images and then fusion of the pixel spectral information [1, 2].

Different types of remote sensing images have different spatial, spectral, and temporal resolutions. The purpose of remote sensing information fusion is to combine their respective resolution advantages to compensate for the deficiency of a single image in some resolution. At present, the main information fusion methods for remote sensing images are principal component analysis, mineral or vegetation index and band-ratio methods, the Tasseled Cap transform, multiplication transform, ratio transform, IHS-based transform, and so forth. All of the above methods suffer from partial loss of the spectral information of the original-resolution image due to their limitations, whereas the wavelet transform can fuse image information over multiple wave bands: it not only utilizes the spatial information of the high-resolution image but also keeps the spectral information of the low-resolution image as intact as possible. This is also the main purpose of studying fusion techniques for remote sensing images.

The wavelet transform has a sound frequency-division property in the transform domain, and the statistical characteristics of the wavelet coefficients reflect salient features of a remote sensing image such as edges, lines, and regions. The low-frequency coefficients of the multispectral image are kept, and the high-frequency parts of the two images are fused optimally according to a weighting coefficient so as to create the new fused image. Tests carried out on remote sensing image data prove the effectiveness of this algorithm. How to integrate the two single factors of spectral information and spatial resolution, and how to determine the optimal point during the iterative optimization, are still problems demanding prompt solution.

This paper presents an image fusion method based on the fuzzy integral for the problem of fusing multispectral and high-resolution images. The method effectively integrates the two single-factor indexes of spectral information and spatial resolution, carries out pixel-level optimal fusion, and introduces the fuzzy integral, with which the optimal weight can be determined conveniently and efficiently.

2. Fusion Method and Fusion Rules of Remote Sensing Image

2.1. Fusion Method

The detailed steps of the image fusion method based on the fuzzy integral are as follows [3–9] (a minimal code sketch is given after this list): (1) register the high-resolution SPOT image to the three wave bands of the multispectral TM image, respectively; (2) carry out 3-layer wavelet transform on each of the three wave bands of the multispectral TM image to extract its low-frequency coefficients; (3) carry out 3-layer wavelet transform on the high-resolution SPOT image to extract its high- and low-frequency coefficients; (4) fuse the images in the various wave bands in accordance with the fusion rules, and obtain the fused image after the inverse wavelet transform; (5) adjust the weighting coefficients α and β of the data fusion in the fusion rules according to the optimizing evaluation indexes of the fused image, and repeat the fusion and evaluation steps above; (6) efficiently determine the optimal value of α with the fuzzy integral to end the optimizing process; (7) obtain the final fusion image by integrating the fused images of the three wave bands.
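As a concrete illustration of Steps (2)–(4) and (7), the following Python sketch (using NumPy and PyWavelets, which are this illustration's choices rather than tools prescribed by the paper) keeps the low-frequency coefficients of a multispectral band and blends the high-frequency coefficients of the two images with a weight α; the variable names, the Haar wavelet, and the three decomposition levels are illustrative assumptions.

```python
# A minimal sketch of Steps (2)-(4) and (7), assuming the SPOT image has already
# been registered and resampled to each TM band (Step (1)).
import numpy as np
import pywt

def fuse_band(ms_band, pan, alpha, wavelet="haar", levels=3):
    """Keep the MS low-frequency coefficients and blend the detail coefficients."""
    beta = 1.0 - alpha                          # weights satisfy alpha + beta = 1
    ms_coeffs = pywt.wavedec2(ms_band, wavelet, level=levels)
    pan_coeffs = pywt.wavedec2(pan, wavelet, level=levels)

    fused = [ms_coeffs[0]]                      # Step (2): retain the MS approximation
    for ms_detail, pan_detail in zip(ms_coeffs[1:], pan_coeffs[1:]):
        # Step (4): weighted fusion of the (H, V, D) detail sub-bands at each level
        fused.append(tuple(alpha * m + beta * p
                           for m, p in zip(ms_detail, pan_detail)))
    return pywt.waverec2(fused, wavelet)        # inverse transform gives the fused band

def fuse_image(ms_bands, pan, alphas):
    """Step (7): fuse the three wave bands and stack them into one image."""
    return np.dstack([fuse_band(b, pan, a) for b, a in zip(ms_bands, alphas)])
```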

2.2. Fusion Rules

The fusion rules for the multispectral TM image and the high-resolution SPOT image after 3-layer wavelet decomposition are as follows (a layer-by-layer code sketch is given after this list): (1) extract the 3rd-layer low-frequency coefficients of the multispectral image; (2) determine the fused value of the high-frequency coefficients according to the following equation, so as to carry out pixel-level fusion:
$$D_{F}(x,y)=\alpha D_{M}(x,y)+\beta D_{P}(x,y), \tag{1}$$
wherein $D_{M}$ and $D_{P}$ are the high-frequency coefficients of the TM and SPOT images, respectively; $\alpha$ and $\beta$ are the optimal weight coefficients to be determined and satisfy $\alpha+\beta=1$; that is, the determination of the optimal weight coefficients comes down to finding the $\alpha$ that meets the objective function; (3) carry out the 3rd-layer inverse wavelet transform using the 3rd-layer low-frequency coefficients of the multispectral image and the fused high-frequency coefficients; (4) take the image obtained from the 3rd-layer inverse wavelet transform as the low-frequency coefficients of the 2nd-layer wavelet transform, with the high-frequency coefficients calculated according to (1); (5) carry out the 1st-layer inverse wavelet transform in the same way as Steps (3) and (4) to obtain the fused image.
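The same rule can also be written layer by layer, exactly as Steps (1)–(5) describe: the result of each inverse transform becomes the low-frequency input of the next finer layer. A hedged sketch follows, assuming power-of-two image sizes and the Haar wavelet so that coefficient shapes match between layers.

```python
# Layer-by-layer variant of the fusion rules; fuse_hf applies Eq. (1) at one level.
import pywt

def fuse_hf(ms_detail, pan_detail, alpha):
    """Eq. (1): D_F = alpha * D_M + (1 - alpha) * D_P for each detail sub-band."""
    return tuple(alpha * m + (1.0 - alpha) * p
                 for m, p in zip(ms_detail, pan_detail))

def fuse_band_by_layers(ms_band, pan, alpha, wavelet="haar"):
    ms_coeffs = pywt.wavedec2(ms_band, wavelet, level=3)    # [LL3, d3, d2, d1]
    pan_coeffs = pywt.wavedec2(pan, wavelet, level=3)

    low = ms_coeffs[0]                                      # Step (1): keep the MS LL3
    for ms_detail, pan_detail in zip(ms_coeffs[1:], pan_coeffs[1:]):
        # Steps (2)-(4): fuse this layer's details, then use the inverse transform
        # of this layer as the low-frequency coefficients of the next (finer) layer.
        low = pywt.idwt2((low, fuse_hf(ms_detail, pan_detail, alpha)), wavelet)
    return low                                              # Step (5): fused band
```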

2.3. Optimizing Evaluation Index of Fusion Image and Optimal Objective Function
2.3.1. Evaluation Index of Spectral Information

The correlation degree between the fused image and the multispectral image is used as the evaluation index of spectral information. Let $F$ be the image after fusion and $A$ be the multispectral image. A Gaussian template is selected in accordance with the spatial resolution of the multispectral image to carry out passivating (blurring) treatment on $F$: the lower the resolution of $A$ is, the larger the passivating template will be. The evaluation index of spectral information is
$$Q_{1}=\operatorname{Corr}(F',A)=\frac{\sum_{i=1}^{N}\bigl(F'_{i}-\bar{F'}\bigr)\bigl(A_{i}-\bar{A}\bigr)}{\sqrt{\sum_{i=1}^{N}\bigl(F'_{i}-\bar{F'}\bigr)^{2}\sum_{i=1}^{N}\bigl(A_{i}-\bar{A}\bigr)^{2}}}, \tag{2}$$
wherein $F'$ is the passivated fused image, $N$ is the number of pixel points in the image, and $\bar{F'}$ and $\bar{A}$ are the gray averages of the images; the degree of correlation $\operatorname{Corr}(F',A)$ reflects the similarity level of the images $F'$ and $A$.
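A hedged sketch of this index, assuming SciPy's Gaussian filter as the passivating template (the σ value is an illustrative parameter, not one given by the paper):

```python
# Spectral-information index Q1: blur ("passivate") the fused band with a Gaussian
# template matched to the MS resolution, then correlate it with the MS band.
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_index(fused_band, ms_band, sigma=2.0):
    blurred = gaussian_filter(fused_band.astype(float), sigma)
    f = blurred - blurred.mean()
    a = ms_band.astype(float) - ms_band.mean()
    return float((f * a).sum() / np.sqrt((f ** 2).sum() * (a ** 2).sum()))
```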

2.3.2. Evaluation Index of Spatial Resolution

The correlation degree between the high-frequency components of the fused image's grey levels and the high-frequency components of the high-resolution image is used as the index of spatial resolution [10–13]. Let $P$ be the high-resolution panchromatic image. Firstly, the fused image is transformed to a grey-level image and then wavelet decomposition is carried out, so as to obtain the four components $F_{LL}$, $F_{HL}$, $F_{LH}$, and $F_{HH}$ of the fused image, which, respectively, represent the low-frequency component, the high-frequency component in the horizontal direction, the high-frequency component in the vertical direction, and the high-frequency component in the diagonal direction. In the same way, the four components $P_{LL}$, $P_{HL}$, $P_{LH}$, and $P_{HH}$ of the wavelet decomposition of the high-resolution image can be obtained. The evaluation index of spatial resolution is defined as the correlation degree between the corresponding high-frequency components:
$$Q_{2}=\frac{1}{3}\bigl[\operatorname{Corr}(F_{HL},P_{HL})+\operatorname{Corr}(F_{LH},P_{LH})+\operatorname{Corr}(F_{HH},P_{HH})\bigr]. \tag{3}$$
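A sketch of this index under the same assumptions; averaging the correlations of the three detail components is this sketch's reading of Eq. (3), and the Haar wavelet is an illustrative choice.

```python
# Spatial-resolution index Q2: correlation between the high-frequency wavelet
# components of the fused (grey-level) image and those of the panchromatic image.
import numpy as np
import pywt

def corr(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def spatial_index(fused_grey, pan, wavelet="haar"):
    _, (fh, fv, fd) = pywt.dwt2(fused_grey.astype(float), wavelet)
    _, (ph, pv, pd) = pywt.dwt2(pan.astype(float), wavelet)
    return (corr(fh, ph) + corr(fv, pv) + corr(fd, pd)) / 3.0
```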

3. Proposed Method

3.1. Fuzzy Integral Based Fusion

The key point of comprehensive evaluation using the fuzzy integral is the definition of the fuzzy measure $g$, and the $g_{\lambda}$ measure can be adopted. For a finite domain of discourse (factor set), once the fuzzy measures of the single-point sets (single-factor sets) are determined, the measure of any subset can be obtained. For the problem of multispectral and high-resolution image fusion, the domain of discourse is simplified to $X=\{x_{1},x_{2}\}$, and the evaluation factors are spectral information $x_{1}$ and spatial resolution $x_{2}$. The fuzzy measures are $g(\{x_{1}\})$ and $g(\{x_{2}\})$, which can be simply expressed as $g_{1}$ and $g_{2}$; then $g(\varnothing)=0$ and $g(X)=1$. $Q_{1}$ is the evaluation index of spectral information and $Q_{2}$ the evaluation index of spatial resolution; the corresponding evaluation indexes on the domain of discourse are $h(x_{1})=Q_{1}$ and $h(x_{2})=Q_{2}$, which can be simply expressed as $h_{1}$ and $h_{2}$. The fuzzy integral of $h$ with respect to $g$ then serves as the comprehensive evaluation value and is computed as follows.

The elements $x_{1}$ and $x_{2}$ are sorted according to the values of $h_{1}$ and $h_{2}$ and recorded as $x'_{1}$ and $x'_{2}$ in order from small to large. Under this condition, there are two cases: when $h_{1}\le h_{2}$, then $x'_{1}=x_{1}$ and $x'_{2}=x_{2}$; when $h_{1}>h_{2}$, then $x'_{1}=x_{2}$ and $x'_{2}=x_{1}$.

According to the definition of the fuzzy integral, it can be obtained that
$$e=\int_{X}h\circ g=\max\bigl[\min\bigl(h(x'_{1}),\,g(A_{1})\bigr),\,\min\bigl(h(x'_{2}),\,g(A_{2})\bigr)\bigr], \tag{4}$$
wherein $A_{1}=\{x'_{1},x'_{2}\}=X$ and $A_{2}=\{x'_{2}\}$.

There are the following cases. (1) When $h_{1}\ge h_{2}$, the order of $x_{1}$ and $x_{2}$ from small to large is $x_{2}$, $x_{1}$; correspondingly, $h(x'_{1})=h_{2}$, $h(x'_{2})=h_{1}$, $g(A_{1})=g(X)=1$, and $g(A_{2})=g(\{x_{1}\})=g_{1}$; then the fuzzy integral is $e=\max[\min(h_{2},1),\min(h_{1},g_{1})]=\max[h_{2},\min(h_{1},g_{1})]$. (2) When $h_{1}<h_{2}$, the order of $x_{1}$ and $x_{2}$ from small to large is $x_{1}$, $x_{2}$; correspondingly, $h(x'_{1})=h_{1}$, $h(x'_{2})=h_{2}$, $g(A_{1})=g(X)=1$, and $g(A_{2})=g(\{x_{2}\})=g_{2}$; then the fuzzy integral is $e=\max[\min(h_{1},1),\min(h_{2},g_{2})]=\max[h_{1},\min(h_{2},g_{2})]$. (3) When $h_{1}=h_{2}$, $e$ can take either of the values in (1) and (2), which coincide.
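The two-factor case above translates directly into a few lines of code; a minimal sketch (variable names are illustrative):

```python
# Sugeno fuzzy integral on the two-element domain X = {x1, x2}: h1, h2 are the
# evaluation indexes and g1, g2 the fuzzy measures of the single-factor sets.
def fuzzy_integral(h1, h2, g1, g2):
    if h1 >= h2:
        # ascending order x2, x1: e = max(min(h2, g(X)), min(h1, g1)), with g(X) = 1
        return max(h2, min(h1, g1))
    # ascending order x1, x2: e = max(min(h1, g(X)), min(h2, g2))
    return max(h1, min(h2, g2))
```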

In the above derivation, the operations using the "max" and "min" functions come from the lattice operators "∨" and "∧", and the fuzzy integral value $e$ serves as the integrated evaluation result. The fuzzy integral can thus effectively integrate the two single-factor indexes of spectral information and spatial resolution, so as to efficiently determine the optimal weight.

3.2. Experimental Steps

MATLAB is used as the test tool, and the TM image and the SPOT image are selected for the test. The optimal objective function requires the weighting coefficient at which the two index curves $Q_{1}(\alpha)$ and $Q_{2}(\alpha)$ intersect; therefore, the optimal weighting coefficient (found to the specified convergence precision) enables the fused image to reach the maximum spatial resolution while reducing color distortion as much as possible, and the fuzzy integral can effectively integrate these two single-factor indexes and efficiently determine the optimal weight.

The two feature evaluation indexes defined in (2) and (3) are substituted into (4) to calculate the value of the fuzzy integral $e$. Plotting $e$ against $\alpha$ in a coordinate system, the fuzzy integral value shows a nonlinear changing process with a peak point, and this peak point corresponds to the intersection point of the curves of the two feature indexes, that is, the optimal weighting coefficient. Therefore, during the test, the maximum value of $e$ is sought successively, and the corresponding $\alpha$ is the optimal weight. It is not necessary to evaluate every candidate value of $\alpha$; therefore, the test can be finished in a convenient and efficient way.

The detailed test process is as follows: (1) register the high-resolution image to the three wave bands of the multispectral TM image, respectively; (2) carry out wavelet decomposition on each of the three wave bands of the TM image, so as to extract its high- and low-frequency coefficients; (3) carry out wavelet decomposition on the registered SPOT image, so as to extract its high- and low-frequency coefficients; (4) determine the optimal high-frequency coefficients with the fuzzy integral; (5) carry out the inverse wavelet transform layer by layer; (6) integrate the fused images of the three wave bands.

The determination of the maximum value of the fuzzy integral proceeds as follows (a code sketch is given below): firstly, calculate the $e$ values with a step length of 0.1 over the search interval of $\alpha$; then calculate the $e$ values with a step length of 0.01 near the current maximum of $e$, and so on; the $\alpha$ value is obtained when the required degree of convergence is reached, so that its corresponding fuzzy integral value is the maximum; at that point, the two single-factor evaluation index values $h_{1}$ and $h_{2}$ are approximately equal, which satisfies the requirements of the optimal objective function.
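A hedged sketch of this coarse-to-fine search, assuming the weight is sought on [0, 1]; eval_q1 and eval_q2 stand for the index computations of (2) and (3) for a given α, the default measure values are illustrative, and the fuzzy_integral helper repeats the two-factor formula above.

```python
# Coarse-to-fine search for the weight alpha that maximises the fuzzy integral:
# step 0.1, then 0.01, and so on, until the step falls below the tolerance.
import numpy as np

def fuzzy_integral(h1, h2, g1, g2):
    return max(h2, min(h1, g1)) if h1 >= h2 else max(h1, min(h2, g2))

def find_optimal_alpha(eval_q1, eval_q2, g1=0.5, g2=0.5, tol=1e-3):
    lo, hi, step = 0.0, 1.0, 0.1
    best_alpha = 0.5
    while step >= tol:
        grid = np.arange(lo, hi + step / 2, step)
        scores = [fuzzy_integral(eval_q1(a), eval_q2(a), g1, g2) for a in grid]
        best_alpha = float(grid[int(np.argmax(scores))])
        lo = max(0.0, best_alpha - step)        # refine around the current peak
        hi = min(1.0, best_alpha + step)
        step /= 10.0
    return best_alpha
```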

4. Experiments and Analysis

4.1. Images for Experiments

In order to prove the effectiveness of the remote sensing image fusion method based on the fuzzy integral, a fusion test is carried out with a 1024 × 1024 TM multispectral image and a 1024 × 1024 SPOT high-resolution panchromatic image. Figures 1 and 2, respectively, present the two remote sensing images for fusion after spatial registration, that is, the TM multispectral image and the SPOT high-resolution panchromatic image. Figures 3, 4, 5, 6, and 7 are the relevant test result images. It can be seen from Figure 7 that the image after IHS-based transform fusion has high definition but also has serious color distortion.

4.2. Determination of Fuzzy Measure Value

In determining the fuzzy measure values, the fuzzy measures $g_{1}$ and $g_{2}$, respectively, represent the emphasis placed on the multispectral image information and the high-resolution image information. Therefore, different $g_{1}$ and $g_{2}$ values will influence the image fusion effect to a certain degree. Three groups of $g_{1}$ and $g_{2}$ data are selected in this test for comparison; the optimal weighting coefficients obtained from them are shown in Table 1.

It can be seen from the table that the optimal weighting coefficients determined in the 2nd and 3rd wave bands by different $g_{1}$ and $g_{2}$ values are consistent, and $h_{1}$ and $h_{2}$ are approximately equal, while the weighting coefficient in the 1st wave band is also very close. According to the balance requirement between the evaluation index of multispectral information and the evaluation index of high-resolution information, a balanced pair of measure values is selected for the 1st wave band, and the corresponding optimal weighting coefficients $\alpha$ and $\beta$ are adopted.

4.3. Evaluation of Image Fusion Effect
4.3.1. Categories of Performance Parameters

The performance parameters can be mainly divided into three categories: Category I measures the degree of spectral preservation, such as the distortion degree, deviation index, and correlation coefficient; Category II reflects the expressive ability of spatial detail information, such as the variance, information entropy, cross entropy, and definition; Category III reflects image brightness information, such as the mean value [14–18].

4.3.2. Evaluation Statistical Parameter of Fusion Effect

(1) Mean Value and Standard Deviation. In statistical theory, the mean value and standard deviation are defined as
$$\mu=\frac{1}{N}\sum_{i=1}^{N}x_{i},\qquad \sigma=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(x_{i}-\mu\bigr)^{2}},$$
wherein $N$ is the total number of samples and $x_{i}$ is the $i$th sample value.

For an image, $N$ is the total number of pixels and $x_{i}$ is the grey level of the $i$th pixel; then the mean value is the mean grey level of the pixels, which is perceived by the human eye as mean brightness. If the mean value is moderate, the visual effect will be good. The standard deviation reflects how dispersed the grey levels are around the mean grey level. The larger the standard deviation is, the more dispersed the distribution of grey levels will be; the probabilities of occurrence of the grey levels then become closer to one another, so that the contained information volume approaches its maximum. The mean value and standard deviation of the multispectral image and of the image after fusion in this test are shown in Tables 2 and 3.
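For completeness, a minimal sketch of these two statistics as they would be computed for one image band (names are illustrative):

```python
# Mean brightness and grey-level spread of one band, as used in Tables 2 and 3.
import numpy as np

def brightness_stats(band):
    pixels = band.astype(float).ravel()
    return float(pixels.mean()), float(pixels.std())
```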

It can be seen from the tables that the standard deviation of the fused image is higher than that of the multispectral image, which shows that it contains more information than the multispectral image.

Comparing this method with the combined method of absolute-value maximum selection and consistency detection, the variances of all the wave bands obtained by this method are larger; since a larger standard deviation means that the probabilities of occurrence of the grey levels tend to be more even and the contained information volume tends toward the maximum, the fused image of this test contains more information than that of the fusion method based on absolute-value maximum selection and consistency detection. Similarly, the variance data of this test are also larger than those of the energy selection method, so the contained information volume is also larger than with that method.

Comparing this method with the combined method of PCA and wavelet transform, the variance of this test is slightly smaller than that obtained by the PCA-and-wavelet method, which shows that the contained information volume is also slightly less than that of the image obtained by that method.

(2) Correlation Coefficient. The correlation coefficient reflects the degree of correlation between two images, and the degree to which the spectral information of the multispectral image has been changed can be seen by comparing the correlation coefficients of the images before and after fusion. The correlation coefficient of two images is defined as
$$\rho=\frac{\sum_{i,j}\bigl(A(i,j)-\bar{A}\bigr)\bigl(B(i,j)-\bar{B}\bigr)}{\sqrt{\sum_{i,j}\bigl(A(i,j)-\bar{A}\bigr)^{2}\sum_{i,j}\bigl(B(i,j)-\bar{B}\bigr)^{2}}},$$
wherein $A(i,j)$ and $B(i,j)$, respectively, are the grey levels at point $(i,j)$ of the two images, and $\bar{A}$ and $\bar{B}$ are their mean grey levels. The correlation coefficient between the multispectral image and the fused image, as well as that between the high-resolution image and the fused image, is shown in Table 4.
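The correlation coefficient defined above is the ordinary Pearson correlation of the two grey-level arrays, so it can be checked with a one-line NumPy call (an illustrative shortcut, not the paper's implementation):

```python
# Correlation coefficient between two images of the same size.
import numpy as np

def correlation(img_a, img_b):
    return float(np.corrcoef(img_a.ravel().astype(float),
                             img_b.ravel().astype(float))[0, 1])
```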

It can be seen from Table 4 that this test balances the information of the multispectral image and the high-resolution image well. Both correlation coefficients in the 1st and 2nd wave bands exceed 0.9, while the correlation coefficient between the high-resolution image and the fused image in the 3rd wave band is slightly poorer. In general, the correlation coefficient between the multispectral image and the fused image is better, which shows a particular emphasis on the information of the multispectral image.

Compared with the data in Tables 5 and 6, the correlation coefficient between the multispectral image and the fused image in this test is obviously higher than that of the other two methods, but the correlation coefficient between the high-resolution image and the fused image is lower than that of those two methods, which shows that this test puts particular emphasis on multispectral information. However, it can be seen from the data in the 1st and 2nd wave bands that both categories of correlation coefficient reach more than 0.9, which shows that this test pays attention to the balance between the two; the test results agree with the theory. It can also be seen from the table that the data in the 3rd wave band are generally low, but the correlation coefficient between the multispectral image and the fused image in this test still reaches more than 0.9, which shows that the multispectral information is still kept well, while the correlation coefficient between the high-resolution image and the fused image is slightly lower than that of the other two methods, though the difference is not large. In general, this test balances multispectral information and high-resolution information and achieves a better result in terms of the correlation coefficient.

(3) Distortion Degree. The spectral distortion degree of the image directly reflects the spectral distortion of the multispectral image, and a smaller distortion degree is better. The spectral distortion degree is defined as
$$D=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl|F(i,j)-A(i,j)\bigr|,$$
wherein $D$ is the spectral distortion value, $M\times N$ is the size of the image, and $F(i,j)$ and $A(i,j)$ are, respectively, the grey levels at point $(i,j)$ of the fused image and the original image. The spectral distortion degrees in this test are shown in Table 7.
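A hedged sketch of the distortion degree as the mean absolute grey-level difference, which is how the definition above reads (names are illustrative):

```python
# Spectral distortion degree between the fused band and the original MS band.
import numpy as np

def distortion_degree(fused_band, ms_band):
    diff = np.abs(fused_band.astype(float) - ms_band.astype(float))
    return float(diff.mean())   # smaller values indicate less spectral change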

It can be seen from Table 7 that the distortion degree reflects the spectral change between the multispectral image and the fusion result image, and a smaller distortion degree indicates less change. Compared with the combined method of absolute-value maximum selection and consistency detection, this test puts its emphasis on multispectral information, and the theory coincides with the practice.

Compared with the combined method of PCA and wavelet transform, the distortion of the 2nd wave band is small, which shows that the spectral information is kept well, while that of the 3rd wave band is obviously larger than that of the PCA-and-wavelet method, which shows that the 3rd wave band is slightly distorted.

(4) Information Entropy. In 1948, the founder of modern information theory, Claude Elwood Shannon, showed that the average information and the entropy of statistical thermodynamics have the same probabilistic mathematical expression. He therefore defined the average information as entropy; that is,
$$H=-K\sum_{i}p_{i}\log p_{i},$$
wherein $K$ is a constant related to the logarithmic system: when the base-2 logarithm is used the unit of entropy is the bit, and when the natural logarithm is used the unit is the nat. The grey levels of the pixels of a single image can be regarded as independent samples; the grey-level distribution of the image is $p_{i}$, where $p_{i}$ is the ratio between the number of pixels whose grey level equals $i$ and the total number of pixels of the image, and $L$ is the total number of grey levels. For an image histogram with the grey-level range $[0,L-1]$, the information entropy is defined as
$$H=-\sum_{i=0}^{L-1}p_{i}\log_{2}p_{i}.$$

It is easy to see that $0\le H\le \log_{2}L$: when some $p_{i}=1$ (and the others are 0), $H=0$; when all $p_{i}=1/L$, $H$ reaches its maximum $\log_{2}L$. Image information entropy is an important index for measuring the richness of image information, and the detailed expressive ability of images can be compared through their information entropies. The size of the entropy reflects the information volume carried by the image: the larger the entropy of the fused image is, the more information it contains. In addition, if the probabilities of occurrence of all the grey levels tend to be equal, the contained information volume tends toward the maximum. The information entropy values in the test are shown in Table 8.
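A minimal sketch of the histogram-based information entropy, assuming 8-bit grey levels (L = 256); the parameters are illustrative:

```python
# Shannon entropy of an image in bits, computed from its grey-level histogram.
import numpy as np

def information_entropy(band, levels=256):
    hist, _ = np.histogram(band, bins=levels, range=(0, levels))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]                       # empty grey levels contribute nothing
    return float(-(p * np.log2(p)).sum())
```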

It can be seen from Table 8 that the information entropy values obtained with this method are larger than those obtained with the other methods. Since the entropy reflects the information volume carried by the image, the larger entropy of the fused image means that it carries more information. This shows that the fused image of this test contains more information, with a better fusion effect, and demonstrates the effectiveness of the method.

5. Conclusions

The following conclusions can be drawn from the research and tests in this paper. (1) The optimal image fusion method based on the fuzzy integral presented in this paper combines the features of the fuzzy integral and the wavelet transform to carry out pixel-level optimal fusion of the high-frequency coefficients; the fuzzy integral effectively integrates the two single-factor indexes of spectral information and spatial resolution and determines the optimal weight conveniently and efficiently, so that the fused image reaches the maximum spatial resolution while color distortion is reduced as far as possible; the method balances the two feature indexes of spatial detail information and spectral information in the fusion effect and effectively improves the spectral information index of the fused image. (2) When multi-index optimal fusion is considered, the fuzzy integral can effectively integrate the multiple index factors. This ability of the fuzzy integral to integrate multiple factors can be applied to the fusion of multispectral and high-resolution images, and the results agree better with people's subjective perception of the fused image. (3) The fuzzy integral can determine the optimal weight efficiently, so the computational load of the algorithm is small, and the fusion effect satisfies the requirements of most occasions, giving the method practical value.

In addition, in the method presented in this paper, the high-frequency coefficients are determined by the optimal weight, which is a pixel-level fusion method. An optimal fusion method for remote sensing images based on the statistical properties of wavelets could also be considered: in IHS space, a wavelet fusion method is applied to the high-frequency part of the intensity component I to fuse the detailed features of the high-frequency subbands, while the fuzzy integral is used to fuse the low-frequency part. Such detailed processing of the high-frequency part should achieve a better effect. The wavelet transform is an effective algorithm in the current fusion field; how to determine the order of the wavelet transform and how to fuse the coefficients within the wavelet domain most effectively remain open questions. These issues are crucial to image fusion and should be researched further.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.