Abstract

A new robust adaptive fusion method for dual-modality PET/CT medical images is proposed within the Piella framework. The algorithm consists of three steps. Firstly, the registered PET and CT images are decomposed using the nonsubsampled contourlet transform (NSCT). Secondly, to highlight the lesions in the low-frequency image, the low-frequency components are fused by a pulse-coupled neural network (PCNN), which has a high sensitivity to featured areas with low intensities. For the high-frequency subbands, a Gaussian random matrix is used for compressive measurement, the histogram distance between every two corresponding subblocks of the high-frequency coefficients is employed as the match measure, and the regional energy is used as the activity measure. The fusion factor is then calculated from the match measure and the activity measure. The high-frequency measurements are fused according to the fusion factor, and the high-frequency fusion image is reconstructed from the fused measurements by the orthogonal matching pursuit algorithm. Thirdly, the final image is acquired through the inverse NSCT of the low-frequency fusion image and the reconstructed high-frequency fusion image. To validate the proposed algorithm, four comparative experiments were performed: a comparison with other image fusion algorithms, comparisons of different activity measures and different match measures, and PET/CT fusion of lung cancer images (20 groups). The experimental results show that the proposed algorithm better retains and displays the lesion information and is superior to the other fusion algorithms under both subjective and objective evaluations.

1. Introduction

The main purpose of medical image fusion is to generate a composite image by integrating complementary information from multiple medical source images of the same scene [1]. PET/CT fusion integrates molecular images and anatomical images; the fused image contains pathophysiological information from both modalities and improves the identifiability of lesion areas. It not only supports the differential diagnosis of benign and malignant lesions and improves the detection rate of local space-occupying lesions but also enables whole-body imaging in tumor exploration. Medical image fusion plays an important role in clinical applications such as image-guided radiotherapy, image-guided surgery, and noninvasive diagnosis, thereby helping the diagnosis and differential diagnosis, treatment planning, therapeutic monitoring, and prognostic evaluation of many serious diseases [2]. In Reference [3], by analyzing the Piella framework and multiscale analysis theory, four construction methods of pixel-level fusion rules are presented on the basis of the Piella framework, and a self-adaptive PET/CT fusion algorithm for lung cancer based on the Piella framework and DT-CWT is proposed as the first fusion path.

The general framework of multiresolution image fusion was first proposed by Zhang and Blum [4]. On this basis, Piella [5] developed and extended the framework by categorizing the key technologies of multiresolution image fusion into two parts, multiresolution transform and fusion rule, making the multiresolution fusion method more systematic and standardized. At present, research on PET/CT image fusion can be divided into two aspects. One aspect is the multiresolution transform; fusion schemes based on multiresolution transforms are widely used in practical applications. The commonly used wavelet-transform-based methods overcome spectral distortion but can capture only limited directional information. Novel multiresolution transforms have therefore been proposed, such as the ridgelet transform [6], curvelet transform [7], bandlet transform [8], contourlet transform [9], nonsubsampled contourlet transform (NSCT) [10], and shearlet transform [11]. A medical image fusion algorithm based on weighting the contourlet transform coefficients is studied in [12]. In addition, medical image fusion with the NSCT and with the contourlet transform has been compared [13], and the experimental results show that the NSCT can improve the contrast of the fused image during fusion. The other aspect is the design of fusion rules within the Piella framework, the purpose of which is to explore how to construct the match measure and the activity measure by improving and optimizing traditional fusion rules [14, 15].

In recent years, researchers have proposed many new image fusion methods [16–19], such as CT image feature-level fusion with rough sets used in pulmonary nodule detection [20], GA-SVM [21], COVID-19 detection on CT images [22], and high-dimensional feature reduction based on the variable precision rough set and genetic algorithm in medical images [23]. Fusion methods in the compressed sensing domain have also emerged in recent years to address the high time complexity of medical image fusion. Compressed sensing theory was first applied to image fusion by Wan and Canagarajah [24]. Luo et al. [25] proposed a classification-based image fusion method that uses data similarity as the adjustment term of the fusion weights and the mean of the observed values as the standard for measuring the energy of the original images in the fusion rule. In 2011, the superiority and effectiveness of compressed sensing theory in image fusion applications were demonstrated [26]. Medical image fusion based on compressed sensing can improve time and space efficiency during network transmission and provides technical support for mobile medical services and medical treatment.

The characteristics of medical image fusion in the compressed sensing domain are as follows. Firstly, it is not a simple pixel-based fusion but a fusion of a small amount of sampled data. Secondly, the characteristics of the compressive measurements differ from those of traditional transform coefficients, so the fusion rules must be adapted accordingly. Thirdly, the time and space efficiency of image fusion is improved. Fourthly, redundant information between the different source images is reduced.

In general, the fusion framework can be summarized as follows. Firstly, the source images are transformed with an appropriate sparse representation method. Secondly, the sparse coefficients are sampled using a compressive measurement matrix, and the measurements are fused according to fusion rules designed for the characteristics of the observed values. Thirdly, the fused image is reconstructed by performing the corresponding inverse transform over the merged coefficients. The fusion process is shown in Figure 1, and a minimal code sketch is given below.
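To make the three steps concrete, the following minimal Python sketch outlines the measurement-domain fusion pipeline. All function arguments are placeholders for a concrete sparse transform, measurement matrix, fusion rule, and recovery algorithm; they are assumptions for illustration, not the paper's exact implementation.

def cs_domain_fusion(img_a, img_b, decompose, reconstruct, phi, fuse, recover):
    # Step 1: sparse representation of each source image.
    s_a = decompose(img_a)
    s_b = decompose(img_b)
    # Step 2: compressive measurement, then fusion in the measurement domain.
    y_a = phi @ s_a
    y_b = phi @ s_b
    y_f = fuse(y_a, y_b)
    # Step 3: recover the fused coefficients and invert the sparse transform.
    s_f = recover(phi, y_f)
    return reconstruct(s_f)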

In this paper, we propose a self-adaptive PET/CT fusion algorithm based on compressed sensing and histogram distance within the Piella framework. The fusion rule for the low-frequency coefficients in the NSCT domain is computed with the pulse-coupled neural network (PCNN) method [27]. For the high-frequency subbands obtained by the NSCT, a Gaussian random matrix is used for compressive measurement, the regional energy serves as the activity measure, and the histogram distance between corresponding subbands in the eight directions is calculated as the match measure. The fusion factor is calculated from the match measure and the activity measure. According to the fusion factor, the high-frequency measurements are fused, and the high-frequency fusion image is reconstructed from the fused measurements by the orthogonal matching pursuit algorithm. By combining the NSCT with compressed sensing, sparse high-frequency subbands are obtained after the NSCT, and the fused image can be reconstructed from a few observations extracted from the large amount of redundant data generated by the multiresolution decomposition. Experimental results show that the algorithm reduces the workload of high-frequency signal sampling and improves image contrast. In addition, the fused image has a good visual effect and scores well on the objective indices.

2. Method and Material

2.1. The Piella Framework

The multiresolution image fusion framework of Piella is shown in Figure 2. The two source images A and B are decomposed by a multiresolution transform, which has gradually evolved from the conventional pyramid decomposition to the wavelet transform and the curvelet transform. The decomposition coefficients y_A and y_B are obtained by the multiresolution transform, and the fusion rules for the decomposition coefficients are summarized as the four modules outlined in the dashed box in Figure 2: match measure, activity measure, decision module, and combination module. The match measure m_AB is used to measure the matching and similarity between the coefficients of the two source images; the activity measures a_A and a_B are used to extract feature information and highlight salient parts. The activity and match measures feed the decision module, which combines the similarity and feature information of the decomposition coefficients into the decision factor d. The decomposition coefficient y_F of the fusion image is obtained by combining the image decomposition coefficients according to d, and the fusion image F is obtained by the multiresolution inverse transform of y_F.

2.2. Image Fusion Based on NSCT and Compressed Sensing

The NSCT was proposed by da Cunha et al. in 2006. It is constructed with a double filter bank approach: a nonsubsampled pyramid (NSP) and nonsubsampled directional filter banks (NSDFB) [28]. The source image is first filtered by the NSP to obtain subband images of the same size as the source image; assume the NSP decomposition has L levels. After the NSP decomposition, the high-frequency subband at level j is decomposed into 2^{l_j} directions by an iterated two-channel NSDFB, generating 2^{l_j} high-frequency directional images. Therefore, one low-frequency subband and Σ_j 2^{l_j} high-frequency subbands are obtained after the NSCT decomposition.
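As a quick check of this subband bookkeeping, the snippet below counts the NSCT subbands for a given list of per-level directional decomposition levels; the function name and example values are illustrative.

def nsct_subband_count(direction_levels):
    # One low-frequency subband plus 2**l directional subbands per level.
    high = sum(2 ** l for l in direction_levels)
    return 1, high

# One pyramid level with 3 directional levels, as used later in this paper:
print(nsct_subband_count([3]))  # -> (1, 8)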

In compressed sensing theory, the original signal can be recovered from a small number of observations; applied to images, the one-dimensional formulation is extended to operations on two-dimensional matrices (e.g., by column stacking). Assuming that an M × N observation matrix Φ (M << N) is used to measure the image signal x ∈ R^N, the observation vector y ∈ R^M is obtained:

y = Φx. (1)

In this process, the image data are reduced from dimension N to observation data of dimension M, and compressed sampling is realized [29]. However, the prerequisite for data compression is that the signal satisfies the prior condition of sparsity, i.e., a sparse representation can be obtained by an orthogonal basis transform or a tight frame transform:

x = Ψs, (2)

where s is the representation of the image x in the Ψ domain in Equation (2).

If the number K of nonzero entries of s is much smaller than N, the image is sparse and s is the sparse coefficient vector of the image in the Ψ domain. In this paper, we take the NSCT as the sparse basis of the original image. Equation (3) can be obtained by combining Equations (1) and (2):

y = ΦΨs = A_CS s. (3)
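A hedged numpy illustration of Equations (1)–(3) follows: a Gaussian random matrix Φ measures a K-sparse coefficient vector s standing in for a column-stacked NSCT high-frequency subband. The sizes and sparsity level are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 128, 10     # signal length, number of measurements, sparsity

# Gaussian random measurement matrix (Equation (1)).
phi = rng.standard_normal((M, N)) / np.sqrt(M)

# K-sparse coefficient vector; Psi is the identity here because the NSCT
# high-frequency coefficients are taken as already sparse (Equation (2)).
s = np.zeros(N)
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

# Compressive observation (Equation (3)).
y = phi @ s
print(y.shape)             # (128,) -- reduced from dimension 256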

Among them, A_CS = ΦΨ is the sensing matrix; the number of equations M is far less than the number of unknowns N (M << N), so the system has no determined solution. Since the signal is sparse (K << N), the uniqueness of the solution can be guaranteed if the sensing matrix satisfies the restricted isometry property (RIP). For any K-sparse signal v and a constant δ_K ∈ (0, 1), the following should hold:

(1 − δ_K) ||v||_2^2 ≤ ||A_CS v||_2^2 ≤ (1 + δ_K) ||v||_2^2. (4)

In Equation (4), the constant δ_K is known as the RIP constant [30]. The image undergoes sparse transformation and compressive measurement, and a fusion rule is designed to fuse the compressed measurements. The high-dimensional image is then restored from the fused measurements by a reconstruction algorithm. This process is formulated as a minimum l1-norm problem and expressed mathematically as follows:

min ||s||_1  subject to  y = ΦΨs. (5)

This is a convex optimization problem [31]. A convex relaxation algorithm can be used to solve the l1-norm optimization problem, as can total variation (TV) optimization. In addition, the signal can be reconstructed by other methods, such as relaxing the l0-norm problem to an l1-norm problem and then solving the optimization, or reconstructing the image by Bayesian methods after introducing sparsity through a prior distribution.

Taken together, one low-frequency subband and several high-frequency subbands are obtained after the NSCT, and different fusion methods should be applied to the high- and low-frequency subbands. Since the high-frequency subbands of the NSCT usually contain multidirectional image information, a large amount of computation is required in the fusion process, making it time-consuming and inefficient. In contrast, combining the NSCT with compressed sensing can significantly reduce the computation and the data storage space in image fusion.

2.3. Self-Adaptive Fusion Algorithm of PET/CT Based on Compressed Sensing and Histogram Distance
2.3.1. Algorithm Idea

By analyzing the features of PET and CT images, an adaptive PET/CT fusion algorithm based on compressed sensing and histogram distance is proposed. The main steps of the algorithm are as follows:

First step: a single-level NSCT is applied to the registered PET and CT source images (PET as image A, CT as image B), yielding a low-frequency subband L_A (resp. L_B) and eight high-frequency subbands H_A^k (resp. H_B^k, where k = 1, 2, ..., 8 is the direction index) in different directions.

Second step: L_A and L_B mainly contain the low-frequency signal, which has poor sparsity. In this paper, the PCNN fusion rule is used to fuse the low-frequency subbands and obtain the low-frequency fusion image L_F, since the PCNN has high sensitivity to the featured regions of an image.

Third step: H_A^k and H_B^k mainly contain the detailed information of the original images and have higher sparsity, so a higher reconstruction accuracy can be obtained by compressive sampling. Therefore, a Gaussian random matrix Φ is used for compressive measurement of H_A^k and H_B^k to obtain the measured values Y_A^k and Y_B^k.

Fourth step: the fusion rule is determined according to the Piella framework:

(1) Match measure: the high-frequency subbands are divided into blocks, and the histogram distance between corresponding blocks is calculated as the match measure D(A, B).
(2) Activity measure: the regional energy of the high-frequency subbands is calculated and used as the activity measures E_A and E_B.
(3) Decision module: the fusion factor d of the self-adaptive decision model is calculated from D(A, B), E_A, and E_B.
(4) Combination module: the measured values Y_A^k and Y_B^k are fused based on the fusion factor d, and the fused measurement value Y_F^k is obtained. The high-frequency fusion image H_F is then reconstructed using the orthogonal matching pursuit algorithm.

Fifth step: the final fusion image F is obtained by the inverse NSCT of L_F and H_F. The framework of the fusion algorithm is shown in Figure 3.

2.3.2. Lowpass Subband Fusion Rule

The gray levels of PET and CT images are different and usually mutually exclusive, since the imaging mechanisms of PET and CT differ. A PET scan shows a metabolically active malignant lesion as a dark spot, while a CT scan provides detailed images of the bones and organs inside the body. The low-frequency image obtained from the source image by the NSCT mainly contains the approximate components of the source image and has very low sparsity. If a random measurement matrix were used for compressive sampling, the reconstruction accuracy of the low-frequency fusion would suffer. Since the low-frequency subband image mainly shows background information, the PCNN fusion rule, which exploits the fact that the human visual system is more sensitive to featured objects or regions, is employed to highlight the lesions in the whole image.

The PCNN of a single neuron is composed of a receiving section, a modulation section, and a pulse generator, described by the following iterative model [29]:

F_ij(n) = e^{−α_F} F_ij(n − 1) + V_F Σ_kl M_ij,kl Y_kl(n − 1) + S_ij
L_ij(n) = e^{−α_L} L_ij(n − 1) + V_L Σ_kl W_ij,kl Y_kl(n − 1)
U_ij(n) = F_ij(n) (1 + β L_ij(n)) (5)
θ_ij(n) = e^{−α_θ} θ_ij(n − 1) + V_θ Y_ij(n)
Y_ij(n) = 1 if U_ij(n) > θ_ij(n), and 0 otherwise,

where F_ij is the feedback input, L_ij is the link input, θ_ij is the dynamic threshold, U_ij is the internal activity term, Y_ij is the pulse output, ij is the neuron label, and n is the number of iterations. S_ij is the external stimulus, and M and W are the synaptic connection weights between neurons, i.e., the strength of the connection between the feedback domain of neuron ij and the input domain of neuron kl. α_F, α_L, and α_θ are the time decay constants; V_F, V_L, and V_θ are the amplitude coefficients, and β is the coefficient of internal active connection.

The specific steps for applying the PCNN to low-frequency image fusion are as follows (a code sketch follows this list):

(1) The low-frequency coefficients L_A(i, j) and L_B(i, j) of the source images obtained by the NSCT are used as the external stimulus of the neurons.
(2) Neuron pulses are generated according to Equation (6), and the number of ignitions (firings) is accumulated and used as the basis for selecting the low-frequency fusion coefficients:

T_ij(n) = T_ij(n − 1) + Y_ij(n). (7)

(3) The low-frequency subband coefficients with more ignition times are selected as the low-frequency fusion coefficients, giving the low-frequency fusion image:

L_F(i, j) = L_A(i, j) if T_A,ij(N) ≥ T_B,ij(N), and L_B(i, j) otherwise. (8)
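The following Python sketch implements a simplified PCNN of the form above and returns the per-pixel firing counts T used in Equations (7) and (8). All parameter values and the 3 × 3 linking kernel are illustrative assumptions, not the paper's settings.

import numpy as np
from scipy.signal import convolve2d

def pcnn_firing_counts(stim, n_iter=100, beta=0.2, alpha_f=0.1, alpha_l=1.0,
                       alpha_t=0.3, v_f=0.5, v_l=0.2, v_t=20.0):
    # Simplified PCNN iteration; `stim` is a normalized low-frequency subband.
    stim = np.asarray(stim, dtype=float)
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])            # assumed linking weights (M = W)
    f = np.zeros_like(stim)                    # feedback input F
    l = np.zeros_like(stim)                    # link input L
    y = np.zeros_like(stim)                    # pulse output Y
    theta = np.ones_like(stim)                 # dynamic threshold
    t = np.zeros_like(stim)                    # firing counter T (Eq. (7))
    for _ in range(n_iter):
        link = convolve2d(y, w, mode="same")   # weighted pulses of neighbors
        f = np.exp(-alpha_f) * f + v_f * link + stim
        l = np.exp(-alpha_l) * l + v_l * link
        u = f * (1.0 + beta * l)               # internal activity U
        y = (u > theta).astype(float)          # fire when U exceeds threshold
        theta = np.exp(-alpha_t) * theta + v_t * y
        t += y
    return t

# Fusion by Equation (8): keep the coefficient whose neuron fired more often.
# l_f = np.where(pcnn_firing_counts(l_a) >= pcnn_firing_counts(l_b), l_a, l_b)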

2.3.3. Highpass Subband Fusion Rule

Definition 1. Histogram distance refers to the accumulated value of the difference between two histograms, i.e., the sum of the difference in frequency between gray scales corresponding to two images.

In this paper, the histogram distance between corresponding blocks of high-frequency coefficients is calculated. Assuming the numerical interval of a coefficient block is [g_1, g_k], the histogram function of this coefficient block is h(g_i) = n_i / N, where N is the total number of coefficients in the coefficient block, n_i is the number of coefficients taking the ith value in the block, and g_i is the ith value, i = 1, 2, ..., k.

Histogram distance is used to evaluate the similarity between frames in video image processing. Prompted by this, in this paper the histogram distance is introduced as the similarity measure for high-frequency subbands. High-frequency subbands mainly contain the detailed characteristics and edge information of the image, so the high-frequency subbands obtained by the multiresolution transform of lung cancer images have multidirectional characteristics and structural similarity. In this paper, the high-frequency coefficients are partitioned into blocks, and the histogram distance between corresponding coefficient blocks is used to determine the similarity of the high-frequency decomposition coefficients. The calculation procedure of the histogram distance is shown in Figure 4.

The specific steps are as follows:

First step—compressed measurement: linear measurement of the high-frequency subband coefficients H_A^k and H_B^k is performed using a Gaussian random measurement matrix Φ, and the measured values Y_A^k and Y_B^k of the corresponding subband coefficients are obtained.

Second step—computation of the match measure: each high-frequency subband is divided into blocks of equal size. The coefficient blocks are extracted from top to bottom and left to right. The histogram distance between the corresponding high-frequency blocks of the two source images A and B is calculated according to Equation (9), and the obtained D(A, B) is used as the match measure:

D(A, B) = Σ_{i=1}^{k} | h_A(g_i) − h_B(g_i) |. (9)

In Equation (9), h_A(g_i) and h_B(g_i) are the histograms of the corresponding coefficient blocks of the high-frequency subbands of the source images; the numerical interval of the coefficient block is [g_1, g_k], and g_i is the ith value. The smaller the value of D(A, B), the higher the similarity between the two coefficient blocks and the smaller the difference between their histogram frequencies. Conversely, the bigger the value, the lower the similarity between source images A and B.
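A minimal implementation of the Equation (9) match measure is sketched below. The bin count and the value-range handling are illustrative choices not specified by the paper.

import numpy as np

def histogram_distance(block_a, block_b, bins=32):
    # Shared value range so that bin i means the same g_i in both histograms.
    lo = float(min(block_a.min(), block_b.min()))
    hi = float(max(block_a.max(), block_b.max()))
    if hi <= lo:                       # degenerate case: constant blocks
        hi = lo + 1e-12
    h_a, _ = np.histogram(block_a, bins=bins, range=(lo, hi))
    h_b, _ = np.histogram(block_b, bins=bins, range=(lo, hi))
    h_a = h_a / block_a.size           # frequency h_A(g_i)
    h_b = h_b / block_b.size           # frequency h_B(g_i)
    return float(np.abs(h_a - h_b).sum())   # D(A, B): smaller = more similar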

Third step—calculation of the activity measure: regional energy can better maintain the correlation between two images and retain the original useful information, thereby generating fused images with better visual effect. Thus, the regional energy is used as the activity measure of the high-frequency fusion rule. The high-frequency subbands are partitioned into neighborhoods by defining a neighborhood window w, and the regional energy is calculated with the following equation:

E(i, j) = Σ_{(m, n) ∈ w} w(m, n) [H(i + m, j + n)]^2, (10)

where E(i, j) is the energy of the coefficient points in the neighborhood of (i, j) and w is the neighborhood window used to calculate the regional energy of the high-frequency subbands.
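A corresponding sketch of the Equation (10) activity measure follows, assuming a uniform 3 × 3 neighborhood window; the paper does not fix the window size or weights.

import numpy as np
from scipy.signal import convolve2d

def regional_energy(subband, win=3):
    # Windowed sum of squared coefficients around every position (i, j).
    kernel = np.ones((win, win))
    return convolve2d(np.asarray(subband, dtype=float) ** 2,
                      kernel, mode="same")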

Fourth step—self-adaptive decision module: the match measure and the activity measure are used to construct the self-adaptive decision model and calculate the fusion factor d for the high-frequency subbands. A threshold T is set as follows. If D(A, B) ≤ T, the similarity of the coefficient blocks is high, and the fusion factor is calculated by self-adaptive weighting with the regional energy. If D(A, B) > T, the difference in regional energy between the two high-frequency subbands is significant, and the high-frequency block with higher energy is used as the high-frequency subband coefficient of the fusion image. Because the histogram distances of the high-frequency coefficient blocks of the two source images vary considerably, the mean of the histogram distances over all coefficient blocks is used as the threshold T to make the rule more flexible. This self-adaptive threshold setting improves the accuracy of the decision module. The decision factor is calculated as follows:

d = E_A / (E_A + E_B), if D(A, B) ≤ T;
d = 1, if D(A, B) > T and E_A ≥ E_B; (11)
d = 0, if D(A, B) > T and E_A < E_B,

where E_A and E_B are the regional energies of the corresponding blocks of images A and B.

Fifth step—combination module: after the match measure and activity measure have been determined and the decision factor d has been calculated, the fused high-frequency measurement value is computed, giving the combination module. The specific expression of the combination module is as follows:

Y_F^k = d · Y_A^k + (1 − d) · Y_B^k, (12)

where Y_A^k and Y_B^k are the measured values of the high-frequency subband coefficients of source images A and B after multiresolution decomposition and compressive measurement, and Y_F^k is the fused high-frequency measurement value of the fusion image F.
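Putting the decision module (Equation (11)) and the combination module (Equation (12)) together for one block, under the energy-weighted reading described above:

import numpy as np

def fuse_block_measurements(y_a, y_b, d_ab, e_a, e_b, thresh):
    # Decision factor d (Equation (11)).
    if d_ab <= thresh:                    # similar blocks: adaptive weighting
        d = e_a / (e_a + e_b + 1e-12)
    elif e_a >= e_b:                      # dissimilar: select higher energy
        d = 1.0
    else:
        d = 0.0
    # Combination module (Equation (12)): Y_F = d * Y_A + (1 - d) * Y_B.
    return d * np.asarray(y_a) + (1.0 - d) * np.asarray(y_b)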

Sixth step—reconstruction and recovery: the orthogonal matching pursuit algorithm is used to reconstruct the high-frequency fusion image H_F from the fused measurement value Y_F^k. The final fusion image F is then obtained by the inverse NSCT of L_F and H_F together.
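A compact orthogonal matching pursuit sketch for this recovery step is given below; `k` is the assumed sparsity of the fused high-frequency measurement, and the stopping tolerance is an illustrative choice.

import numpy as np

def omp(phi, y, k, tol=1e-8):
    # Greedily recover a k-sparse s such that y is approximately phi @ s.
    n = phi.shape[1]
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on the selected columns.
        coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
        residual = y - phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    s_hat = np.zeros(n)
    s_hat[support] = coef
    return s_hat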

3. Experiments and Analysis

3.1. Experimental Environment
3.1.1. Hardware Environment

The hardware platform used for the simulation experiments is a Pentium(R) Dual-Core CPU E6700, 3.2 GHz, with 2.0 GB of memory, running Windows 7.

3.1.2. Software Environment

The software environment is MATLAB R2012b.

3.1.3. Experimental Data

As shown in Figure 5, the registered CT and PET images were from two groups of patients with lung cancer.

3.1.4. Parameter Settings of NSCT Transform

The pyramid decomposition level is set to 1 and the number of directional decomposition levels is set to 3 (yielding 2^3 = 8 directional subbands); the NSP stage uses biorthogonal wavelets for decomposition and the NSDFB uses trapezoidal filters.

3.2. Experimental Results and Analysis

To verify the superiority of the proposed algorithm, it was compared with other fusion methods, including the traditional pixel-level fusion methods (maximum method, minimum method, and weighted average method) and the compressed-sensing-based fusion methods (compressed sensing image fusion based on the wavelet transform (W-CS) and compressed sensing image fusion based on the contourlet transform (CT-CS)). On this basis, further experiments were conducted to study the effects of the activity measure and the match measure within the Piella framework and to analyze the influence of different activity and match measures on PET/CT fusion performance.

The evaluation of the fused image includes subjective evaluation and objective evaluation. Subjective evaluation is the most reliable form of image quality inspection, especially in medical image fusion, where it plays an important role in helping doctors make diagnoses. However, subjective evaluation is not easy to conduct, since it requires not only equipment and strict working conditions but also close cooperation among the people involved. Therefore, objective parameters were employed in this paper to evaluate the quality and performance of the fused image, including standard deviation (SD), average gradient (AG), spatial frequency (SF), peak signal-to-noise ratio (PSNR), information entropy (IE), mutual information (MI), and edge preservation (Q^{AB/F}).
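For reference, common definitions of three of these indices are sketched below; the paper does not spell out its exact formulas, so these standard forms are an assumption.

import numpy as np

def standard_deviation(img):
    # SD: gray-level spread of the fused image.
    return float(np.std(img))

def average_gradient(img):
    # AG: mean local gradient magnitude, a sharpness proxy.
    gx, gy = np.gradient(np.asarray(img, dtype=float))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def spatial_frequency(img):
    # SF: combined row and column activity of the image.
    img = np.asarray(img, dtype=float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))   # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))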

3.2.1. Experiment One: Comparative Experiment of Fusion Methods

The proposed algorithm was compared with several other algorithms, including the maximum method, minimum method, weighted average method, compressed sensing image fusion based on the wavelet transform (W-CS), and compressed sensing image fusion based on the contourlet transform (CT-CS). Among these algorithms, W-CS uses weighted averaging of the sparse transform coefficients as its fusion rule, and CT-CS weights the low-frequency subbands by a Gaussian membership function as its fusion rule; for the high-frequency subbands, a method based on average gradient and regional energy is used to fuse the high-frequency measurements. In the compressed-sensing-based fusion methods, the measurement matrix is a Gaussian random matrix and the reconstruction algorithm is orthogonal matching pursuit, with a sampling rate of 50%. Figure 6 shows the fusion results of the six methods.

As shown in Figure 6, the grayscale fluctuations of the fusion images obtained with the simple pixel-level fusion methods were generally smaller. For example, the pixel values of the fusion image obtained with the minimum method were lower, and the bones appeared dark. The pixel values of the fusion image obtained with the maximum method were higher and the contrast of the lesions was high, but the spatial resolution was severely damaged. The fusion image obtained with the weighted average method, whose values lie between those of the maximum and minimum methods, was weakened in contrast, and the lesions could not be accurately identified. While the W-CS and CT-CS methods achieved a twofold compression of the data (a 50% reduction relative to the source images), their fusion quality was not high. The fusion image obtained with W-CS, for example, exhibited a horizontal water-wave pattern, with fuzzy texture and blurred contours. Slight spectral overlap was observed in the fusion image obtained with CT-CS, which resulted from the contourlet transform. The fusion method proposed in this paper not only clearly and completely shows the metabolic function of the lesions and the surrounding tissues but also increases the contrast among bone, soft tissue, and organs. In contrast to the traditional fusion methods, the proposed method reduces the dimensionality of the high-frequency fusion computation, and the dimension reduction achieved by compressive measurement greatly reduces the amount of data involved in image fusion. The combination of compressed sensing and the multiresolution transform thus reduces the storage space required in the fusion process.

The histogram of objective evaluation results for image fusion is shown in Figure 7. The proposed algorithm is superior to the other five methods on the objective indices. With a 50% sampling rate, the standard deviation (SD), average gradient (AG), spatial frequency (SF), and peak signal-to-noise ratio (PSNR) of the proposed algorithm's fusion image were higher than those of the other five algorithms. Regarding the information entropy (IE), the W-CS and CT-CS fusion algorithms were better than the proposed algorithm; this was caused by the fact that the W-CS fusion image showed significant blur in some areas and the contourlet transform produced spectral overlap, with large changes in gray level and large edge fluctuations in the fusion image, thereby inflating the measured amount of image information. Except for the information entropy, the index values of the proposed fusion method were better than those of the CT-CS and W-CS algorithms. Therefore, the NSCT yields a sparser representation than the wavelet transform, producing a fusion image with more detailed and complete information.

3.2.2. Experiment 2: Comparative Experiment of Activity Measure

The similarity between the CT and PET images was measured by the histogram distance, which was used as the match measure, and measures commonly used in image fusion, including regional energy, gradient, variance, and signal intensity, were used as the activity measure to investigate the effect of different activity measures on the fusion performance. The experimental results are shown in Table 1.

As shown in Table 1, overall, changes in the activity measure had no significant effect on the final fusion result, and there were no significant differences among the seven evaluation indexes. However, among the four activity measures, the standard deviation (SD), average gradient (AG), spatial frequency (SF), and mutual information (MI) were the highest when regional energy, as used by the proposed algorithm, served as the activity measure. When signal intensity was used as the activity measure, the peak signal-to-noise ratio (PSNR) and information entropy (IE) were the highest. The edge preservation (Q^{AB/F}) was the highest when regional variance was used as the activity measure. Therefore, with the same match measure, it can still be concluded that the activity measure based on regional energy has better stability and wider applicability and that the activity measure based on the regional gradient has the least impact on the performance of image fusion.

3.2.3. Experiment 3: Comparative Experiment of Match Measure

On the basis of experiment 2, the regional energy was chosen as the activity measure. The gradient ratio, energy ratio, signal intensity ratio, and structural similarity, all of which are commonly used to describe the similarity of images, together with the proposed histogram distance, were used as match measures to compare their influence on the fusion performance of the PET and CT images. The experimental results are shown in Table 2.

The standard deviation (SD), average gradient (AG), spatial frequency (SF), peak signal-to-noise ratio (PSNR), and mutual information (MI) were the highest when the histogram distance was used as the match measure. In comparison, the information entropy (IE) and the edge preservation (Q^{AB/F}) ranked highest when the signal intensity ratio was used as the match measure. Therefore, with the same activity measure, the match measure based on the histogram distance has better stability and wider applicability; this is because the match measure is adaptive, enabling the fusion image to better integrate the redundant and complementary information of the source images and strengthening the extraction of information from the source images. Taking the results of experiments 2 and 3 together, compared with the other activity and match measures, the proposed combination of regional energy and histogram distance improves the performance and quality of image fusion and has practical value.

3.2.4. Experiment 4: PET/CT Fusion Results of Lung Cancer (20 Groups)

To further validate the effectiveness of the algorithm, a simulation experiment on the PET and CT images of 20 patients with lung cancer was performed, comparing the proposed algorithm with the maximum method, minimum method, weighted average method, compressed sensing image fusion based on the wavelet transform (W-CS), and compressed sensing image fusion based on the contourlet transform (CT-CS). The fusion results are shown in Table 3.

For these six methods, an objective evaluation was further carried out with indicators including standard deviation (SD), average gradient (AG), peak signal-to-noise ratio (PSNR), information entropy (IE), and edge preservation (Q^{AB/F}). The evaluation indicators of the six methods were compared respectively.

As can be seen in Table 4 and Figure 8, the standard deviation of the proposed algorithm is the largest compared with the weighted average method, W-CS, and CT-CS. Against the remaining methods, the fused image of the proposed algorithm has a higher standard deviation than the maximum method in 16 of the 20 groups (80%) and a higher standard deviation than the minimum method in 17 groups (85%).

As can be seen in Table 5 and Figure 9, the average gradient of the proposed algorithm is the largest compared with the minimum method, the weighted average method, and W-CS. The fused image of the proposed algorithm has a higher average gradient than CT-CS in 14 of the 20 groups (70%) and a higher average gradient than the maximum method in 19 groups (95%).

As can be seen in Table 6 and Figure 10, compared with the minimum and maximum methods, the weighted average method, W-CS, and CT-CS, the PSNR of the fused image of the proposed algorithm is the highest; the minimum method is next, and the PSNR of the maximum method is the lowest.

As can be seen in Table 7 and Figure 11, compared with the weighted average method, W-CS, and CT-CS, the information entropy of the proposed algorithm is the largest and is close to that of the minimum method; only the information entropy of the maximum method is greater than that of the proposed algorithm.

As can be seen in Table 8 and Figure 12, compared with the weighted average method, W-CS, and CT-CS, the edge preservation (Q^{AB/F}) of the fused images of the proposed algorithm is the largest. The fused image of the proposed algorithm has a higher Q^{AB/F} than the maximum method in 15 of the 20 groups (75%) and a higher Q^{AB/F} than the minimum method in 19 groups (95%).

In summary, the proposed algorithm achieved better results in both the subjective and objective evaluations: the fused images carry a higher amount of information, extract the useful information of the original images, and show a significant comprehensive advantage, which fully reflects the strengths of compressed sensing and the nonsubsampled contourlet transform. The fused images effectively combine the anatomical structure of the CT image with the functional, physiological, and pathological information of the PET image in patients with lung cancer. This helps doctors analyze and judge the lesions and provides effective imaging information for clinical work, surgery, and disease diagnosis.

4. Conclusions

In this paper, a self-adaptive PET/CT fusion algorithm based on compressed sensing and histogram distance is proposed. Firstly, the NSCT is performed on the PET and CT images, and the PCNN fusion rule, which has a high sensitivity to featured regions with low intensities, is used to fuse the low-frequency components and highlight the lesions in the image. Secondly, a Gaussian random matrix is used to obtain the measured values of the high-frequency subbands; the histogram distance between high-frequency subblocks is used as the match measure, and the regional energy of the high-frequency subbands is used as the activity measure. The fusion factor is calculated from the match measure and the activity measure; the high-frequency measurements are fused according to the fusion factor, and the high-frequency fusion image is reconstructed from the fused measurements by the orthogonal matching pursuit algorithm. Thirdly, the final fusion image is acquired through the inverse NSCT of the low-frequency fusion image and the high-frequency fusion image. Finally, the results of the four experiments show that the proposed algorithm outperforms the other algorithms.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

An oral report on this work was given at Workshop 1: Adversarial Machine Learning, IEEE WCCI 2020. This work was supported by the Natural Science Foundation of China (Grant No. 62062003), the Key Research and Development Project of Ningxia (special projects for talents) under Grant No. 2020BEB04022, and the North Minzu University Research Project of Talent Introduction under Grant No. 2020KYQD08.