Mathematical Problems in Engineering
Volume 2016 (2016), Article ID 5637306, 12 pages
http://dx.doi.org/10.1155/2016/5637306
Research Article

Multifocus Image Fusion in Q-Shift DTCWT Domain Using Various Fusion Rules

1School of Mechatronic Engineering and Automation, Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai 200072, China
2Shanghai Electric Group Co., Ltd., Shanghai 200072, China
3Shanghai Electrical Apparatus Research Institute (Group) Co., Ltd., Shanghai 200063, China

Received 20 April 2016; Revised 31 August 2016; Accepted 25 September 2016

Academic Editor: Sergio Teggi

Copyright © 2016 Yingzhong Tian et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Multifocus image fusion is a process that integrates a partially focused image sequence into a single fused image that is in focus everywhere; many methods for it have been proposed in the past decades. The Dual Tree Complex Wavelet Transform (DTCWT) is one of the most precise, eliminating two main defects of the Discrete Wavelet Transform (DWT). The Q-shift DTCWT was proposed afterwards to simplify the construction of the filters in DTCWT, producing better fusion effects. A different image fusion strategy based on the Q-shift DTCWT is presented in this work. In this strategy, each image is first decomposed into low and high frequency coefficients, which are fused using separate rules; various fusion rules are then innovatively combined within the Q-shift DTCWT, such as the Neighborhood Variant Maximum Selectivity (NVMS) and the Sum Modified Laplacian (SML). Finally, the fused coefficients can be extracted from the source images and reconstructed to produce one fully focused image. This strategy is verified visually and quantitatively against several existing fusion methods in extensive experiments and yields good results both on standard images and on microscopic images. Hence, we can conclude that the NVMS rule performs best after the Q-shift DTCWT.

1. Introduction

Since an optical lens has a limited depth of focus (DOF), it is difficult to obtain an image of an object with every part of the field in focus. Because human eyes are insensitive to unsharpness or blurriness within the DOF of the lens, only the objects within the DOF appear to be in focus, while objects out of the DOF are blurred [1]. The solution is to use multifocus image fusion to extract the in-focus information from each partially focused image into one fully focused image, which is better for visual perception and further computation. This technology has been extended to various fields, such as microscopic imaging, visual inspection, 3D shape recovery, and measurement [2]. Thus, it has attracted the attention of many researchers, who have presented a variety of fusion algorithms.

Existing image fusion methods can be categorized into fusion methods at the pixel, feature, and decision levels [3]. In this paper, only pixel level methods are discussed, due to their high accuracy and low information loss, which are better for microscopic imaging. Fusion methods at the pixel level are subdivided into spatial domain and transform domain algorithms [2, 4]. Transform domain algorithms, namely, the multiresolution algorithms, are more robust, since the human visual system processes information in a multiresolution way, in line with the processing principle of transform domain algorithms [5]. Multiresolution algorithms can therefore achieve higher precision, while spatial domain methods are commonly used when speed is required. Transform domain fusion methods have two major research hotspots: pyramid algorithms [2, 6, 7] and DWT algorithms [2], with DWT based fusion currently producing the better fusion effect [5]. The DTCWT proposed by Selesnick et al. [8] is an improvement on DWT, with several important additional properties: approximate shift invariance, good directional selectivity, and limited data redundancy. In the Q-shift solution [8], the construction process of the filters is largely simplified. Image fusion with guided filtering was presented by Li et al. [9]. Yin et al. presented a method of simultaneous image fusion and superresolution using sparse representation [10]. Yang and Li reported a pixel level image fusion method with simultaneous orthogonal matching pursuit [11].

In this work, we have proposed an effective fusion rule for multifocus image fusion after using Q-shift DTCWT and performed this method over partially focused image sequences blurred by Gaussian operators and microscopic image sequences. The proposed method is compared with DWT and different pyramid algorithms. Some common objective metrics [12, 13] are used to analyze the performance of these algorithms and different fusion rules.

The remainder of this paper is organized as follows. Section 2 deals with the preliminary details of Q-shift DTCWT. Section 3 describes the proposed approach to image fusion. Section 4 covers the fusion rules applied in the proposed approach. In Sections 5 and 6, the experimental results, performance comparisons, and time consumption are given. Finally, conclusions are drawn in Section 7.

2. Q-Shift DTCWT Related Theory

To overcome the drawbacks of the DWT method, Kingsbury proposed the Complex Wavelet Transform (CWT) for image processing. The CWT works with complex-valued filters, and perfect reconstruction filter banks must be built in this complex form. It is relatively easy to construct such filter banks for the first decomposition level, but much harder for higher levels. In 1999, Kingsbury proposed a new form, the DTCWT, which keeps the advantages of the CWT while achieving perfect reconstruction [8]. The DTCWT is a wavelet transform that uses a pair of binary-structured filter trees to compute wavelet transforms of the real and imaginary components of the signal in parallel. The principle of the DTCWT decomposition of a two-dimensional signal at one level is shown in Figure 1; the reconstruction process is not presented, as it is simply the inverse of the decomposition.

Figure 1: Decomposition diagram of the Chinese character “wood” at one level using Q-shift DTCWT.

The DTCWT algorithm gains a series of advantages from its unique structure: approximate shift invariance, good directional selectivity, and limited data redundancy. Due to its shift invariance, images fused by DTCWT are smooth and continuous, while images fused by DWT contain irregular edges. Another major advantage of DTCWT is good directional selectivity, since DTCWT produces six subbands at each scale for both imaginary and real parts, in (±15°, ±45°, ±75°), while DWT only presents limited directions in (0°, 45°, 90°); this improves the transform precision and keeps more detailed information. However, in DTCWT, the process of designing filters is somewhat complicated, as they must simultaneously satisfy the biorthogonal conditions and the phase condition [8]. Q-shift DTCWT addresses this problem: its quarter-shift filters produce complex wavelets that are exactly linear phase, which largely simplifies the filter construction process.

The transform uses two parallel tree-structured filter banks (Tree A and Tree B in Figure 1, generating the real and imaginary parts, resp.) that realize complex wavelet filtering of the input signal. To achieve shift invariance, the filters are designed with different delays, ensuring that each filter can sample the values discarded by the other filters because of downsampling; in this way, aliasing is minimized. The trees are applied to the rows and columns of the signal, respectively, and generate two low frequency coefficients, L1 and L2, containing rough information, and six high frequency coefficients, H1 to H6, containing detailed information in different directions. In Figure 1, the character “wood” is decomposed into eight subimages; the two low frequency coefficients appear similar to the original, while the other six are the extracted detailed information in different directions. The two low frequency images are used as the input signal of the next decomposition level.

3. The Proposed Fusion Methods

The general image fusion scheme using DTCWT for both grayscale and color images is shown in Figure 2. Since DTCWT can only be computed on monochrome images, color images must first be decomposed into single-channel subimages. As the commonly used RGB color model creates color distortion due to the high correlation among the R, G, and B components, we transform it into the YIQ color model. Since the Y (luminance) component contains much more information than the I and Q (chrominance) components [14], we perform the DTCWT fusion only on the Y components, while the I and Q components are fused based on a mapping table corresponding to the Y fusion. Finally, the images in the YIQ color model are transformed back into the RGB color model for display. When dealing with grayscale images, the above steps are omitted and DTCWT fusion is performed directly on the source images.
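The RGB-to-YIQ conversion described above can be sketched in a few lines of numpy. The matrix below is the standard NTSC conversion; the paper does not state the exact coefficients it uses, so treat the values and the function names as illustrative assumptions.

```python
import numpy as np

# Standard NTSC RGB -> YIQ conversion matrix (an assumption; the paper does
# not give its exact coefficients). Row 0 yields Y (luminance), rows 1-2
# yield I and Q (chrominance).
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])

def rgb_to_yiq(rgb):
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to YIQ."""
    return np.asarray(rgb, dtype=float) @ RGB2YIQ.T

def yiq_to_rgb(yiq):
    """Convert a YIQ image back to RGB for display."""
    return np.asarray(yiq, dtype=float) @ np.linalg.inv(RGB2YIQ).T
```

Only the Y channel would then be passed through the DTCWT fusion, with I and Q fused via the mapping table mentioned above.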

Figure 2: Schematic flowchart of the proposed fusion method.

Detailed steps of the DTCWT fusion process are as follows:

(1) Perform the DTCWT decomposition on the input images a to k, which are focused on different parts. Each image is decomposed into two low frequency coefficients, (La1, La2) to (Lk1, Lk2), and a set of high frequency coefficients, (Ha1, Ha2, …, Han) to (Hk1, Hk2, …, Hkn). The high frequency coefficients represent detailed information of the input image in different directions, and their number n depends on the total decomposition level.

(2) Apply fusion rules to the low frequency coefficients and the high frequency coefficients separately. More details are presented in the next section.

(3) Adjust the selected high frequency and low frequency coefficients with a consistency check. In multifocus images, blurred and sharp areas are generally regionally connected: for example, if the surrounding pixels are in focus, then a single pixel inside must be in focus too. The consistency check [15] picks out such isolated pixels and changes them to be consistent with their surroundings.

(4) Finally, apply the DTCWT reconstruction to the final low frequency coefficients L1 and L2 as well as the high frequency coefficients H1 to Hn to construct the fused image.
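The consistency check in step (3) can be sketched as a majority filter on the per-pixel decision map that records which source image each coefficient was taken from. The function name, binary decision map, and 3×3 window below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def consistency_check(decision, window=3):
    """Majority-filter a binary decision map of source-image indices:
    an isolated pixel whose neighbors were mostly selected from the other
    source image is flipped to agree with its neighborhood."""
    pad = window // 2
    padded = np.pad(decision, pad, mode='edge')
    out = np.empty_like(decision)
    h, w = decision.shape
    for i in range(h):
        for j in range(w):
            block = padded[i:i + window, j:j + window]
            # Majority vote over the window (including the pixel itself).
            out[i, j] = 1 if block.sum() > (window * window) // 2 else 0
    return out
```

Applied after the per-coefficient selection, this enforces the observation that focused and defocused areas are regionally connected.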

4. Fusion Rules

The fusion rule is the most important factor in the proposed fusion methods, as it is the core of distinguishing the focused region from the defocused region. In the proposed method, the low frequency coefficients and high frequency coefficients represent rough information and detailed information, respectively, and require different fusion rules [16]. Several fusion rules are described in this section, including pixel based fusion rules such as the Weighted Average (WA) method [17] and the Synthesis image Module Value Maximum Selectivity (SMVMS), and region based fusion rules such as the Neighborhood Variant Maximum Selectivity (NVMS) [18], the Neighborhood Gradient Maximum Selectivity (NGMS) [19], and the SML [20]. Details of these fusion rules are discussed below.

4.1. The Weighted Average (WA) Method

WA is normally used for low frequency coefficients, as the differences in low frequency information are comparatively small. The final coefficient is defined as follows:

$$L_F(x, y) = \sum_{i=1}^{n} w_i L_i(x, y),$$

where $L_1(x,y)$ to $L_n(x,y)$ are the pixel values of the $n$ source low frequency coefficients at location $(x,y)$ and $w_1$ to $w_n$ are the weights for $L_1$ to $L_n$, generally chosen so that $\sum_{i=1}^{n} w_i = 1$.
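The WA rule amounts to a per-pixel weighted sum. A minimal numpy sketch follows; the function name and the equal default weights $w_i = 1/n$ are our assumptions.

```python
import numpy as np

def weighted_average(lows, weights=None):
    """Fuse low frequency coefficients by a weighted average.
    `lows` is a list of equally sized 2-D arrays; with no weights given,
    equal weights 1/n are used so that the weights sum to 1."""
    lows = [np.asarray(L, dtype=float) for L in lows]
    if weights is None:
        weights = [1.0 / len(lows)] * len(lows)
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * L for w, L in zip(weights, lows))
```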

4.2. The Neighborhood Variant Maximum Selectivity (NVMS)

NVMS is defined as follows: calculate the standard deviation $\sigma_k(x,y)$ of the pixel in the $k$th picture at location $(x,y)$:

$$\sigma_k(x, y) = \sqrt{\frac{1}{MN} \sum_{m=-\lfloor M/2 \rfloor}^{\lfloor M/2 \rfloor} \sum_{n=-\lfloor N/2 \rfloor}^{\lfloor N/2 \rfloor} \bigl( I_k(x+m, y+n) - \bar{I}_k(x, y) \bigr)^2},$$

where $M \times N$ is the window size, generally $3 \times 3$ or $5 \times 5$, and $\bar{I}_k(x,y)$ is the neighborhood average, which is given by

$$\bar{I}_k(x, y) = \frac{1}{MN} \sum_{m=-\lfloor M/2 \rfloor}^{\lfloor M/2 \rfloor} \sum_{n=-\lfloor N/2 \rfloor}^{\lfloor N/2 \rfloor} I_k(x+m, y+n).$$

The final coefficient $C_F(x,y)$ is set equal to the $k^*$th pixel, where $k^*$ is chosen by picking the biggest of all computed $\sigma_k(x,y)$ values. Hence,

$$C_F(x, y) = C_{k^*}(x, y), \qquad k^* = \arg\max_k \sigma_k(x, y).$$
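The NVMS rule can be sketched with numpy as follows: compute the neighborhood standard deviation of each source coefficient and pick, per pixel, the coefficient with the largest value. Function names and edge padding are our assumptions.

```python
import numpy as np

def neighborhood_variance(img, window=3):
    """Standard deviation of each pixel's window x window neighborhood."""
    img = np.asarray(img, dtype=float)
    pad = window // 2
    p = np.pad(img, pad, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + window, j:j + window].std()
    return out

def nvms_fuse(coeffs, window=3):
    """Per pixel, keep the coefficient whose neighborhood variance is largest."""
    stack = np.stack([np.asarray(c, dtype=float) for c in coeffs])
    activity = np.stack([neighborhood_variance(c, window) for c in coeffs])
    idx = activity.argmax(axis=0)
    return np.take_along_axis(stack, idx[None], axis=0)[0]
```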

4.3. The Neighborhood Gradient Maximum Selectivity (NGMS)

NGMS is defined by the neighborhood gradient $G_k(x,y)$, which is formed as follows:

$$G_k(x, y) = \sum_{m=-\lfloor M/2 \rfloor}^{\lfloor M/2 \rfloor} \sum_{n=-\lfloor N/2 \rfloor}^{\lfloor N/2 \rfloor} \sqrt{\bigl( \nabla_x I_k(x+m, y+n) \bigr)^2 + \bigl( \nabla_y I_k(x+m, y+n) \bigr)^2},$$

where $\nabla_x$ and $\nabla_y$ denote the horizontal and vertical first differences of the image and $M \times N$ is the window size.

The final coefficient $C_F(x,y)$ is set equal to the $k^*$th pixel and is decided by the biggest $G_k(x,y)$. Hence,

$$C_F(x, y) = C_{k^*}(x, y), \qquad k^* = \arg\max_k G_k(x, y).$$
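NGMS follows the same max-selection pattern as NVMS, with the neighborhood sum of gradient magnitudes as the activity measure. A sketch (using `np.gradient` for the first differences, which is our choice of discretization):

```python
import numpy as np

def neighborhood_gradient(img, window=3):
    """Sum of gradient magnitudes over each pixel's neighborhood."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)          # first differences along rows, columns
    mag = np.hypot(gx, gy)             # per-pixel gradient magnitude
    pad = window // 2
    p = np.pad(mag, pad, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + window, j:j + window].sum()
    return out

def ngms_fuse(coeffs, window=3):
    """Per pixel, keep the coefficient whose neighborhood gradient is largest."""
    stack = np.stack([np.asarray(c, dtype=float) for c in coeffs])
    activity = np.stack([neighborhood_gradient(c, window) for c in coeffs])
    idx = activity.argmax(axis=0)
    return np.take_along_axis(stack, idx[None], axis=0)[0]
```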

4.4. The Sum Modified Laplacian (SML) Method

The Modified Laplacian with step length $s$ is given by

$$\mathrm{ML}_k(x, y) = \bigl| 2 I_k(x, y) - I_k(x-s, y) - I_k(x+s, y) \bigr| + \bigl| 2 I_k(x, y) - I_k(x, y-s) - I_k(x, y+s) \bigr|,$$

and the Sum Modified Laplacian accumulates it over a window:

$$\mathrm{SML}_k(x, y) = \sum_{m=-\lfloor M/2 \rfloor}^{\lfloor M/2 \rfloor} \sum_{n=-\lfloor N/2 \rfloor}^{\lfloor N/2 \rfloor} \mathrm{ML}_k(x+m, y+n),$$

where $s$ denotes the step length (generally $s = 1$). The final coefficient $C_F(x,y)$ is the $k^*$th pixel and is defined by the biggest $\mathrm{SML}_k(x,y)$. Hence,

$$C_F(x, y) = C_{k^*}(x, y), \qquad k^* = \arg\max_k \mathrm{SML}_k(x, y).$$
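The SML measure can be computed with shifted array slices; per-pixel selection then follows the same max-selection pattern as NVMS. Edge padding and the default step length are our assumptions.

```python
import numpy as np

def sum_modified_laplacian(img, step=1, window=3):
    """Modified Laplacian at each pixel, summed over a local window."""
    img = np.asarray(img, dtype=float)
    s = step
    p = np.pad(img, s, mode='edge')
    c = p[s:-s, s:-s]                                  # I(x, y)
    # |2I(x,y) - I(x-s,y) - I(x+s,y)| + |2I(x,y) - I(x,y-s) - I(x,y+s)|
    ml = (np.abs(2 * c - p[:-2 * s, s:-s] - p[2 * s:, s:-s]) +
          np.abs(2 * c - p[s:-s, :-2 * s] - p[s:-s, 2 * s:]))
    pad = window // 2
    q = np.pad(ml, pad, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = q[i:i + window, j:j + window].sum()
    return out
```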

4.5. Synthesis Image Module Value Maximum Selectivity (SMVMS)

SMVMS is used for high frequency coefficients, as high frequency information is generally larger in the focused area. To keep the time consumption short, a synthesis image $S_k^l$ (where $l$ is the decomposition level) is formed by combining the module values of the coefficients in the six directions,

$$S_k^l(x, y) = \sum_{d=1}^{6} \bigl| H_{k,d}^l(x, y) \bigr|,$$

and the fused coefficient is taken from the image whose synthesis module value is maximal:

$$H_{F,d}^l(x, y) = H_{k^*,d}^l(x, y), \qquad k^* = \arg\max_k S_k^l(x, y).$$
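SMVMS can be sketched as follows for one decomposition level. The root-sum-square combination of the six directional moduli below is an assumption (a plain sum of moduli is equally plausible, and the paper's garbled formula does not settle it); either way, the selection is a per-pixel maximum over the synthesis images.

```python
import numpy as np

def smvms_fuse(highs_per_image):
    """Fuse high frequency coefficients by maximum synthesis-image modulus.
    `highs_per_image[k]` holds the six complex directional subbands of source
    image k at one scale, each an H x W array."""
    # Synthesis image: one activity map combining the six directions
    # (root-sum-square of moduli; an assumed combination rule).
    synth = [np.sqrt(sum(np.abs(h) ** 2 for h in subbands))
             for subbands in highs_per_image]
    idx = np.stack(synth).argmax(axis=0)
    fused = []
    for d in range(6):
        stack = np.stack([subbands[d] for subbands in highs_per_image])
        fused.append(np.take_along_axis(stack, idx[None], axis=0)[0])
    return fused
```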

5. Experimental Results and Analysis

The proposed algorithm was run in Matlab 2014a on an Intel i5-4590 processor with 4 GB RAM. The traditional fusion methods based on DWT [16] and on pyramids algorithms such as the Gradient Pyramid (GRAD), Morphology Pyramid (MOP), Ratio Pyramid (RAT), and Laplacian Pyramid (LAP) [2, 13] are used for comparison with the proposed one. The DWT method, the pyramids methods, and the proposed Q-shift DTCWT methods all use 5 decomposition levels. Regarding fusion rules, the traditional methods all take the rule of low frequency WA and high frequency SMVMS. For the proposed one, the five different fusion rules described in the last section are used, and the different combinations are shown in Table 1. DTCWT1 serves to compare the performance of Q-shift DTCWT against the traditional methods. The remaining four are improved methods with better performance. Low frequency information has high correlation with its neighborhood, so it is natural to infer that more region based fusion rules could improve its performance.

Table 1: Results of IQA indicators on fused images.

In DTCWT2 to DTCWT4, three region based fusion rules are applied to the low frequency coefficients to test this inference, and, in DTCWT5, the region based NVMS is used on both the low frequency and the high frequency coefficients for potential further improvement.

5.1. Experiments on Standard Images

In the first experiment, the performance of the proposed fusion method is demonstrated by fusing 20 pairs of blurred images, generated by filtering the source images shown in Figure 3 with a Gaussian filter. Due to space limitations, only six pairs are displayed, but the others are similar to the presented ones. In each of these pairs, complementary regions of the source images are blurred. The source images are standard grayscale or color images and are taken as the ground truth images, serving as templates for comparison.

Figure 3: Standard grayscale and color images for testing. (a) Peppers. (b) Couple. (c) Airplane. (d) Balloon. (e) Lena. (f) Flowers.

We have found that it is difficult to evaluate the quality of fused images visually, especially when the differences are too small to be observed. Hence, objective methods are used for a more scientific and accurate evaluation. The first objective evaluation method is pixel value subtraction: the difference image is obtained by subtracting the ground truth image from the fused image. The subtraction results for “Peppers” are shown in Figure 4, consisting of the ground truth image (Figure 4(a)), the partially blurred images (Figure 4(b) blurred in the middle pepper and Figure 4(c) blurred in the surroundings), subtraction results of the blurred images (Figures 4(d) and 4(e)), and subtraction results of Gradient Pyramid (Figure 4(f)), Morphology Pyramid (Figure 4(g)), Ratio Pyramid (Figure 4(h)), Laplacian Pyramid (Figure 4(i)), DWT (Figure 4(j)), and DTCWT1 to DTCWT5 (Figures 4(k)–4(o)).

Figure 4: Subtraction results of “Peppers.” (a) The ground truth image. (b) Source image blurred in the middle Pepper. (c) Source image blurred in the surroundings. (d) Subtraction image of (b). (e) Subtraction image of (c). (f) Subtraction image with Gradient Pyramid. (g) Subtraction image with Morphology Pyramid. (h) Subtraction image with Ratio Pyramid. (i) Subtraction image with Laplacian Pyramid. (j) Subtraction image with DWT. (k) Subtraction image with DTCWT1. (l) Subtraction image with DTCWT2. (m) Subtraction image with DTCWT3. (n) Subtraction image with DTCWT4. (o) Subtraction image with DTCWT5.

By analyzing the color of these subtraction images against the color bar, we can judge the deviation between each fused image and the ground truth image. Obviously, the subtraction results of GRAD and MOP are relatively poor, and those of RAT and LAP are better but perform extremely poorly at edges. DWT performs better than the pyramids methods, and, by comparing DWT and DTCWT1, it can be concluded that DTCWT is better than DWT under the same fusion rules. With region based fusion rules applied to the low frequency coefficients, as in DTCWT2 to DTCWT4, the fused results are much better. Finally, in DTCWT5, where both the low frequency and the high frequency coefficients use the region based NVMS fusion rule, the deviation is mainly concentrated on the edge of the middle pepper.

The other objective evaluation method uses quantitative metrics [21]. Quantitative metrics are objective evaluation indicators that overcome the influence of inaccurate human visual judgment and evaluate the effectiveness of image fusion mathematically. Three quantitative metrics are applied here: mutual information (MI), peak signal-to-noise ratio (PSNR), and root mean square error (RMSE), which are described as follows.

5.1.1. Mutual Information (MI)

Let $h_A$, $h_B$, and $h_F$ denote the normalized histograms of source image $A$, source image $B$, and the fused image $F$, respectively, and let $h_{AF}$ and $h_{BF}$ denote the joint histograms between the fused image and each source image. $M$ and $N$ denote the row size and column size of the image and normalize the histograms. The mutual information between a source image and the resulting fused image is as follows:

$$MI_{AF} = \sum_{a} \sum_{f} h_{AF}(a, f) \log_2 \frac{h_{AF}(a, f)}{h_A(a)\, h_F(f)},$$

with $MI_{BF}$ defined analogously.

The total mutual information is defined as

$$MI = MI_{AF} + MI_{BF}.$$

Larger values imply better image quality.
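A histogram-based MI computation can be sketched with numpy as follows; the function names and the 256-bin default are our assumptions.

```python
import numpy as np

def mutual_information(a, f, bins=256):
    """MI between a source image and the fused image via joint histogram."""
    joint, _, _ = np.histogram2d(np.asarray(a).ravel(),
                                 np.asarray(f).ravel(), bins=bins)
    pxy = joint / joint.sum()                 # normalized joint histogram
    px = pxy.sum(axis=1, keepdims=True)       # marginal of the source image
    py = pxy.sum(axis=0, keepdims=True)       # marginal of the fused image
    nz = pxy > 0                              # avoid log(0) terms
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def total_mi(a, b, f, bins=256):
    """Total MI: MI(A, F) + MI(B, F)."""
    return mutual_information(a, f, bins) + mutual_information(b, f, bins)
```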

5.1.2. Peak Signal-to-Noise Ratio (PSNR)

$$\mathrm{PSNR} = 10 \log_{10} \frac{L^2}{\frac{1}{MN} \sum_{i=1}^{MN} (R_i - F_i)^2},$$

where $R_i$ and $F_i$ denote, respectively, the $i$th pixel value of the reference image (ground truth image) and of the fused image, and $L$ is the count of gray levels in the image. The PSNR value is higher when the ideal and fused images are more alike; a higher value indicates better fusion.
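For 8-bit images ($L = 255$) this reduces to a few lines of numpy; the peak value is a parameter below since the paper evaluates images of unspecified bit depth.

```python
import numpy as np

def psnr(reference, fused, peak=255.0):
    """Peak signal-to-noise ratio (dB) between the reference and fused image."""
    r = np.asarray(reference, dtype=float)
    f = np.asarray(fused, dtype=float)
    mse = np.mean((r - f) ** 2)
    if mse == 0:
        return float('inf')    # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```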

5.1.3. Root Mean Square Error (RMSE)

RMSE between the reference image $R$ and the fused image $F$ can be calculated as

$$\mathrm{RMSE} = \sqrt{\frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \bigl( R(i, j) - F(i, j) \bigr)^2}.$$

RMSE approaches zero when the reference and fused images are very similar and increases as the similarity decreases.
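RMSE is a one-liner over the pixel differences:

```python
import numpy as np

def rmse(reference, fused):
    """Root mean square error between the reference and fused image."""
    r = np.asarray(reference, dtype=float)
    f = np.asarray(fused, dtype=float)
    return float(np.sqrt(np.mean((r - f) ** 2)))
```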

The MI, PSNR, and RMSE values of the images fused with the various fusion methods are shown in Tables 2–4, with the best values marked in bold. It can be seen that DTCWT2, DTCWT4, and DTCWT5 show better performance when evaluated with MI. The PSNR values of the LAP, DTCWT1, DTCWT2, and DTCWT3 methods are superior to the others. When evaluated with RMSE, the best values belong to RAT, DTCWT3, and DTCWT4. Since the DTCWT based fusion methods obtain the best evaluation values in most situations, it can be concluded that the proposed fusion methods are superior to the existing pyramids and DWT methods, while the fusion rule can be chosen according to the specific conditions, which will be studied in future work.

Table 2: Performance comparison of the standard images with MI.
Table 3: Performance comparison of the standard images with PSNR.
Table 4: Performance comparison of the standard images with RMSE.
5.2. Experiments on Microscopic Images

In the second experiment, the fusion process is performed on two groups of images captured by a metallographic microscope. As described in Section 1, only the objects within the DOF of the objective lens of the microscope appear to be in focus. In reality, observed specimens sometimes have rough surfaces that cannot be completely captured within the DOF in one shot. To solve this problem, several partially focused pictures are captured by adjusting the height of the lens so that different parts fall within the DOF. Then, the fusion methods presented above are applied to the image sequence to obtain an entirely clear image.

Since metallographic microscopes are widely used in the machinery, electronics, chip, chemical, precision instrument, and other industries for observing and analyzing the surface quality of opaque matter, two examples, namely, the detection of a Printed Circuit Board (PCB) and of worn external turning tools, are presented in this paper. Our experimental devices are a metallographic microscope at 5x magnification and a CCD camera connected to a computer. The fusion results are analyzed both visually and objectively.

In the first PCB experiment, since the components and their joints are uneven, it is hard to capture a totally clear image in one shot. But, with the methods proposed in this article, completely focused images can be acquired. Three image sequences of the PCB are shown in Figures 5, 6, and 7, but, due to space limitations, only sequence 1 is displayed in detail. The PCB we used is shown in Figure 5(a), with the inspected area marked in red blocks. In Figure 5, we can see three source images, focused on the board (Figure 5(b)), the solder paste (Figure 5(c)), and the component pin (Figure 5(d)), and ten fused images (Figures 5(e)–5(n)) produced with the various fusion methods, including GRAD, MOP, RAT, LAP, DWT, and DTCWT1 to DTCWT5. For better observation, a small part, marked by the red dotted line block in Figure 5(b), is magnified in Figures 5(o)–5(y), in which Figure 5(o) is extracted from the source image as a ground truth image and Figures 5(p)–5(y) are extracted from the fused images. It is obvious that the images fused with the pyramids methods (Figures 5(p)–5(s)) show severe distortion. The DWT method is better but contains many vertical light beams, which do not appear with the DTCWT methods. DTCWT1 to DTCWT4 contain some degree of light dispersion, but DTCWT5 shows excellent performance.

Figure 5: Source images and fused images of sequence 1 for PCB. (a) Tested PCB. (b) Source image 1. (c) Source image 2. (d) Source image 3. (e)–(n) Fused images with various fusion methods. (o) The magnified ground truth image. (p)–(y) The magnified fused images with various fusion methods.
Figure 6: Source image sequence 2 for PCB.
Figure 7: Source image sequence 3 for PCB.

For objective evaluation, we use only PSNR and RMSE as quantitative metrics, since MI cannot be used when an experiment has more than two source images. As it is impossible to obtain a completely clear image, we crop the clear parts of the source images as the ground truth images, as shown in the red blocks in Figures 5–7. The fused images are cropped at the same positions for objective evaluation.

The PSNR and RMSE values of these areas are shown in Tables 5 and 6, with the best values marked in bold. It can be seen that DTCWT4 and DTCWT5 show the best performance when evaluated with PSNR, while RAT, DTCWT3, DTCWT4, and DTCWT5 are better in the RMSE evaluation. Generally, the results match the subjective evaluation well. We can see that the proposed methods perform better: their fused images preserve more details from the source images and are thus clearer, with less distortion, than those of the other methods. Using the proposed DTCWT fusion methods, we can obtain totally clear images of the PCB for component failure detection, solder paste detection, and other applications.

Table 5: Performance comparison of the PCB images with PSNR.
Table 6: Performance comparison of the PCB images with RMSE.

For the second cutting tool experiment, the image sequences for a worn external turning tool are shown in Figures 8–10, with the turning tool shown in Figure 8(a) and the inspected area marked in red blocks. Due to its worn surface, the turning tool appears partially focused and partially defocused in any single image. We analyze the fusion results both subjectively and objectively; the fused images are not presented due to space limitations. The clear parts of different source images are cropped as the ground truth images, as shown in the red blocks in Figures 8–10.

Figure 8: Source image sequence 1 for external turning tool.
Figure 9: Source image sequence 2 for external turning tool.
Figure 10: Source image sequence 3 for external turning tool.

PSNR and RMSE values of the fusion results are shown in Tables 7 and 8. We can draw the conclusion that DTCWT based methods also have outstanding performances in fusion of turning tool images.

Table 7: Performance comparison of the turning tool images with PSNR.
Table 8: Performance comparison of the turning tool images with RMSE.
Table 9: Calculating time of the standard images (unit: s).

6. Time Consumption

This section discusses the calculation time of the above experiments. From Tables 9 and 10, it can be concluded that the pyramids algorithms cost very little time, the DWT algorithm costs a little more, and the proposed Q-shift DTCWT methods cost the most. The different fusion rules used in this paper appear to have little effect on the computation time.

Table 10: Calculating time of PCB images and external turning tool (ETT.) images (unit: s).

7. Conclusion

This paper provides an effective method for fusing various images using the Q-shift DTCWT. Since DTCWT is approximately shift invariant and has six directions, (±15°, ±45°, ±75°), more than DWT, it preserves more detail and edge information of the source images, and the Q-shift solution simplifies its filter construction process. Since the fusion rule is another significant factor in image fusion, five different fusion rules are presented in this article and evaluated both visually and objectively. From the extensive experimental data, we can conclude that the Q-shift DTCWT methods are somewhat better than the other multiresolution fusion methods under the same fusion rules, but, with region based fusion rules on the low frequency coefficients, or on both the low frequency and the high frequency coefficients, the DTCWT methods show outstanding performance. The proposed methods were applied to microscopic image fusion, including PCB and worn external turning tool images, and the results are consistent with the ideal experiments. The proposed methods were shown to cost more time than the pyramids and DWT algorithms, while using different fusion rules does not add to the computation time. Future work will focus on finding better fusion rules and on determining which fusion rules are appropriate for which situations.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This research work was supported by the National Key Technology Support Program of China (Grant no. 2015BAF10B01) and Science and Technology Commission of Shanghai Municipality (Grants nos. 15111104002 and 15111106302).

References

  1. X. Bai, Y. Zhang, F. Zhou, and B. Xue, “Quadtree-based multi-focus image fusion using a weighted focus-measure,” Information Fusion, vol. 22, pp. 105–118, 2015.
  2. G. Piella, “A general framework for multiresolution image fusion: from pixels to regions,” Information Fusion, vol. 4, no. 4, pp. 259–280, 2003.
  3. D. L. Hall and J. Llinas, “An introduction to multisensor data fusion,” Proceedings of the IEEE, vol. 85, no. 1, pp. 6–23, 1997.
  4. Y. Liu, S. Liu, and Z. Wang, “A general framework for image fusion based on multi-scale transform and sparse representation,” Information Fusion, vol. 24, pp. 147–164, 2015.
  5. S. T. Li, B. Yang, and J. W. Hu, “Performance comparison of different multi-resolution transforms for image fusion,” Information Fusion, vol. 12, no. 2, pp. 74–84, 2011.
  6. G. K. Matsopoulos and S. Marshall, “Application of morphological pyramids: fusion of MR and CT phantoms,” Journal of Visual Communication and Image Representation, vol. 6, no. 2, pp. 196–207, 1995.
  7. Q. G. Miao and B. S. Wang, “Multi-sensor image fusion based on improved Laplacian pyramid transform,” Acta Optica Sinica, vol. 27, no. 9, pp. 1605–1610, 2007.
  8. I. W. Selesnick, R. G. Baraniuk, and N. G. Kingsbury, “The dual-tree complex wavelet transform,” IEEE Signal Processing Magazine, vol. 22, no. 6, pp. 123–151, 2005.
  9. S. Li, X. Kang, and J. Hu, “Image fusion with guided filtering,” IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2864–2875, 2013.
  10. H. Yin, S. Li, and L. Fang, “Simultaneous image fusion and super-resolution using sparse representation,” Information Fusion, vol. 14, no. 3, pp. 229–240, 2013.
  11. B. Yang and S. Li, “Pixel-level image fusion with simultaneous orthogonal matching pursuit,” Information Fusion, vol. 13, no. 1, pp. 10–19, 2012.
  12. H. W. Di and X. F. Liu, “Image fusion quality assessment based on structural similarity,” Acta Photonica Sinica, vol. 35, no. 5, pp. 766–771, 2006.
  13. F. E. Ali, I. M. El-Dokany, A. A. Saad, and F. E. Abd El-Samie, “Curvelet fusion of MR and CT images,” Progress in Electromagnetics Research C, vol. 3, pp. 215–224, 2008.
  14. Y. Chen, L. Wang, Z. Sun, Y. Jiang, and G. Zhai, “Fusion of color microscopic images based on bidimensional empirical mode decomposition,” Optics Express, vol. 18, no. 21, pp. 21757–21769, 2010.
  15. G. C. Ren and L. Shi, “Medical image fusion algorithm based on 2v-SVM and consistency checking,” Computer Engineering and Applications, vol. 46, no. 13, pp. 199–201, 2010.
  16. Y. Yang, S. Y. Huang, J. F. Gao, and Z. Qian, “Multi-focus image fusion using an effective discrete wavelet transform based algorithm,” Measurement Science Review, vol. 14, no. 2, pp. 102–108, 2014.
  17. R. Singh, R. Srivastava, O. Prakash, and A. Khare, “Multimodal medical image fusion in dual tree complex wavelet transform domain using maximum and average fusion rules,” Journal of Medical Imaging and Health Informatics, vol. 2, no. 2, pp. 168–173, 2012.
  18. X. Li and X. Zhan, “A new EPMA image fusion algorithm based on contourlet-lifting wavelet transform and regional variance,” Journal of Software, vol. 5, no. 11, pp. 1200–1207, 2010.
  19. S. Wei, K. Wang, G. L. Yuan et al., “A multi-focus image fusion algorithm in the complex wavelet domain,” Journal of Image and Graphics, vol. 13, no. 5, pp. 951–957, 2008.
  20. V. Aslantas and R. Kurban, “Fusion of multi-focus images using differential evolution algorithm,” Expert Systems with Applications, vol. 37, no. 12, pp. 8861–8870, 2010.
  21. P. Balasubramaniam and V. P. Ananthi, “Image fusion using intuitionistic fuzzy sets,” Information Fusion, vol. 20, no. 1, pp. 21–30, 2014.