Mathematical Problems in Engineering
Volume 2015, Article ID 340675, 14 pages
http://dx.doi.org/10.1155/2015/340675
Research Article

Multifocus Image Fusion Using Biogeography-Based Optimization

1School of Optoelectronic Information, University of Electronic Science and Technology of China, Chengdu 611731, China
2School of Computer Science & Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
3School of Engineering, Brown University, Providence, RI 02912, USA

Received 11 October 2014; Revised 4 February 2015; Accepted 7 February 2015

Academic Editor: George S. Dulikravich

Copyright © 2015 Ping Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In spatial-domain multifocus image fusion, sharper blocks from different source images are selected to construct a fused image. The block size significantly affects the fusion result, and a fixed block size is not suitable for all multifocus images. In this paper, a novel multifocus image fusion algorithm using biogeography-based optimization is proposed to obtain the optimal block size. The sharper blocks of each source image are first selected by the sum-modified-Laplacian and a morphological filter to construct an initial fused image. The proposed algorithm then uses the migration and mutation operations of biogeography-based optimization to search for the optimal block size according to a fitness function based on spatial frequency. A chaotic search is adopted during the iterations to improve optimization precision. The final fused image is constructed using the optimal block size. Experimental results demonstrate that the proposed algorithm performs well in both quantitative and visual evaluations.

1. Introduction

Optical lenses with long focal lengths often suffer from a limited depth of field, so it is impossible to capture an image in which all relevant objects are in focus. Only the objects on the focal plane are sharp, while objects in front of or behind it are blurred [1]. Multifocus image fusion, which synthesizes multiple images of the same viewpoint taken under different focal settings, can be used to extend the depth of field and obtain an all-in-focus image. The fused image contains more useful information about the scene and is more suitable for many applications than any individual image. Multifocus image fusion has played an important role in many fields, such as target recognition, remote sensing, medical diagnosis, and military applications [2].

Many multifocus fusion algorithms have been proposed in recent years. Broadly, they can be categorized into two groups: spatial domain fusion and transform domain fusion [3]. In spatial domain fusion, a new image is fused by directly selecting different regions from the source images. First, the source images are divided into nonoverlapping blocks. Then, the sharpness of each block is calculated using a sharpness measure. Finally, the sharper blocks from the different source images are selected to form the fused image. Common sharpness measures in the spatial domain include average, variance, energy of image gradient (EOG), sum-modified-Laplacian (SML), and spatial frequency (SF) [4]. Recently, many new spatial domain algorithms have been proposed to improve efficiency, such as artificial neural networks (ANN) [5], pulse coupled neural networks (PCNN) [6], independent component analysis (ICA) [7], robust principal component analysis (RPCA) [8], and neighbor distance [9]. In transform domain fusion, a new image is generated via a frequency transform. First, the source images are converted into a transform domain to obtain the corresponding transform coefficients. Then, the coefficients are merged according to a fusion rule. Finally, the fused image is constructed by applying the inverse transform. Common transform domain algorithms are based on pyramid transforms, such as the Laplacian pyramid (LAP), gradient pyramid (GRP), ratio of low-pass pyramid (RAP), and morphological pyramid (MOP) [10]. Fusion algorithms based on wavelet transforms, such as the discrete wavelet transform (DWT) [11], stationary wavelet transform (SWT) [12], and dual-tree complex wavelet transform (DTCWT) [13], are generally superior to the pyramid-based algorithms. More recently, multiscale, multiresolution transforms have been widely applied to image fusion, including the curvelet transform [14], contourlet transform [15], nonsubsampled contourlet transform (NSCT) [16], and nonsubsampled shearlet transform (NSST) [17].

The transform domain algorithms avoid block artifacts, but they are usually complicated and time-consuming to implement, and the multiscale multiresolution transform algorithms are generally shift-variant and sensitive to noise. The spatial domain algorithms are simple to implement and have low complexity [18]. However, block artifacts inevitably arise because the block size in these algorithms is fixed. If the block size is too large, a single block may contain both in-focus and out-of-focus parts; the block's sharpness may then be measured incorrectly, and the out-of-focus part may be selected for the fused image in order to keep the segmented region intact. If the block size is too small, the fused result is sensitive to noise and the computational cost is high. Clearly, a fixed block size is not suitable for all multifocus images.

To solve the fixed-block-size problem, several image fusion algorithms based on optimization have been proposed, such as the genetic algorithm (GA) [19, 20], particle swarm optimization (PSO) [21], and differential evolution (DE) [22]. These algorithms use global optimization to find the optimal block size, which effectively suppresses block artifacts and improves the quality of the fused image. However, their efficiency depends largely on the performance of the underlying optimization algorithm, and GA, PSO, and DE are not entirely satisfactory because of their low convergence rates and limited optimization accuracy.

A novel multifocus image fusion algorithm using biogeography-based optimization is proposed in this paper. Biogeography-based optimization (BBO) [23] is a swarm intelligence optimization algorithm proposed by Simon in 2008. BBO reaches the global optimum through its migration mechanism and mutation operation, and it offers a faster convergence rate and higher search precision than GA, PSO, and DE [23]. These advantages enable BBO to solve complex optimization problems more effectively. It has been applied to image and video processing tasks such as image classification [24], image matching [25], image segmentation [26], image enhancement [27], and motion estimation for video coding [28]. In this paper, the proposed algorithm uses BBO to find the best block size, with a chaotic search embedded to improve optimization accuracy. Experiments on various multifocus images demonstrate that the proposed algorithm performs well in both quantitative and visual evaluations.

The rest of this paper is organized as follows. In Section 2, biogeography-based optimization is briefly reviewed. Section 3 describes the proposed image fusion algorithm in detail. The experimental results and discussion are presented in Section 4. Conclusions are given in Section 5.

2. Biogeography-Based Optimization

Simon proposed biogeography-based optimization (BBO) in 2008 [23]. It is based on the mathematics of biogeography, which describes how species migrate from one island to another, how new species arise, and how species become extinct. Like GA, PSO, and DE, BBO is a stochastic algorithm for solving optimization problems. It has two key operations: migration and mutation. Species on neighboring islands share information through migration, and individual islands improve their diversity through mutation. Together, migration and mutation allow the global optimum to be found effectively. BBO performs well because of its fast convergence rate and high search precision [23].

2.1. Migration

In BBO, each solution of the optimization problem is regarded as an island. Each feature of a solution is regarded as a suitability index variable (SIV), and the fitness value of each solution is regarded as its habitat suitability index (HSI). The higher the HSI of an island, the better the corresponding solution. Islands probabilistically share information through their emigration and immigration rates: high-HSI islands tend to export their features (high emigration rate), while low-HSI islands tend to accept features from others (high immigration rate), as shown in Figure 1.

Figure 1: Island migration rates versus HSI.

The immigration rate λ_k and the emigration rate μ_k are functions of the number of species k on an island:

λ_k = I(1 − k/n),   μ_k = E(k/n),

where E is the maximum emigration rate, I is the maximum immigration rate, k is the number of species of the kth individual, and n is the maximum number of species. For simplicity, assume E = I here; that is, λ_k + μ_k = E.

The basic steps of migration are as follows. First, calculate the HSI values of all islands and sort them in descending order. Second, select an island that needs to immigrate based on its immigration rate, and choose its adjacent islands based on their emigration rates. Then, randomly select a SIV value from the adjacent islands to replace a SIV value of the selected island, and recalculate the HSI values of all islands. The island with the highest HSI value is the optimal solution.
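As an illustration, the migration sweep described above can be sketched in Python. The island representation (a list of SIV values), the fitness callback, and the roulette selection over emigration rates are our own assumptions for a minimal sketch, not the authors' implementation.

```python
import random

def migrate(islands, fitness, E=1.0, I=1.0):
    """One BBO migration sweep (sketch; `islands` is a list of SIV lists).

    Islands are ranked by fitness (HSI): rank k out of n gives
    immigration rate lam_k = I*(1 - k/n) and emigration rate mu_k = E*k/n,
    so better islands emigrate more and immigrate less.
    """
    n = len(islands)
    # order[0] is the worst island, order[-1] the best
    order = sorted(range(n), key=lambda i: fitness(islands[i]))
    rank = {idx: k + 1 for k, idx in enumerate(order)}
    new_islands = [list(isl) for isl in islands]
    for i in range(n):
        lam = I * (1 - rank[i] / n)          # immigration rate of island i
        for d in range(len(islands[i])):
            if random.random() < lam:        # island i accepts an immigrant SIV
                # roulette-select a donor island with probability ~ mu_j
                mus = [E * rank[j] / n for j in range(n)]
                r, acc = random.uniform(0, sum(mus)), 0.0
                for j, mu in enumerate(mus):
                    acc += mu
                    if r <= acc:
                        new_islands[i][d] = islands[j][d]
                        break
    return new_islands
```

After a sweep, every SIV in the population is one of the original SIVs, but poorer islands have probably absorbed features from better ones.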

2.2. Mutation

An island's HSI may change suddenly due to apparently random events such as disease or natural catastrophes. In BBO, this phenomenon is modeled as mutation. The mutation rate m_k is determined as

m_k = m_max (1 − P_k / P_max),

where m_max is a mutation parameter and P_k is the probability that the island holds exactly k species, with P_max = max_k P_k. The relationship between P_k and the migration rates is given by the species-count equation

Ṗ_k = −(λ_k + μ_k) P_k + λ_{k−1} P_{k−1} + μ_{k+1} P_{k+1},

with the convention P_{−1} = P_{n+1} = 0.

This mutation approach makes low-HSI solutions likely to mutate, which gives them a chance of improving. It also makes high-HSI solutions likely to mutate, which gives them a chance of improving even further. Thus, the BBO mutation strategy tends to increase the diversity of the islands.
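A minimal sketch of the mutation-rate rule, assuming the species-count probabilities P_k are already available (e.g., from the steady-state species-count equation); the function name is ours:

```python
def mutation_rates(probs, m_max=0.2):
    """BBO mutation rates m_k = m_max * (1 - P_k / P_max).

    `probs` holds the species-count probability P_k of each island.
    Rare states (small P_k) get large mutation rates, so both the
    best and the worst islands are the most likely to mutate.
    """
    p_max = max(probs)
    return [m_max * (1.0 - p / p_max) for p in probs]
```

For example, with probabilities [0.1, 0.5, 0.2], the most probable island receives mutation rate 0 and the rarest receives the largest rate.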

2.3. Elitism Strategy

BBO incorporates an elitism strategy, as do GA, PSO, and DE, in order to preserve the features of the island with the best solution during the iterative process. This prevents the best solutions from being corrupted by immigration or ruined by mutation: even if the island with the highest HSI is destroyed, BBO has saved a copy and can revert to it if needed. The algorithm proposed in this paper retains the two islands with the highest HSI as elites; these elites are carried unchanged from one generation to the next.

3. Multifocus Image Fusion Using Biogeography-Based Optimization

In this paper, a novel multifocus image fusion algorithm using biogeography-based optimization is proposed. Considering the characteristics of image fusion, the island, the SIVs, and the HSI of BBO are interpreted as the block size, the width and height of the block, and the image quality assessment, respectively. The optimization problem therefore has only two dimensions: the width and height of the block. The key steps of the proposed algorithm are as follows. First, random block sizes are selected as the initial islands, exploiting the global random character of BBO. Then, the HSI value of each block size is calculated. During the iterations, the width and height of each block size are updated through the migration and mutation operations, and the iteration stops when the termination condition is met. Finally, the block size with the highest HSI is taken as the optimal block size. The details of multifocus image fusion using BBO are as follows.

3.1. HSI Function Selection

Spatial frequency [4] measures the overall activity level of an image and reflects its ability to express small details. The larger the spatial frequency of the fused image, the better the fusion performance. In the image fusion algorithms based on GA, PSO, and DE, spatial frequency is chosen as the fitness function [20–22]; it is a good fitness function for assessing fused image quality. In this paper, we also choose spatial frequency as the HSI function of BBO.

The spatial frequency of an M × N image F is defined as

SF = sqrt(RF² + CF²),
RF = sqrt( (1/(MN)) Σ_{i=1}^{M} Σ_{j=2}^{N} [F(i, j) − F(i, j−1)]² ),
CF = sqrt( (1/(MN)) Σ_{i=2}^{M} Σ_{j=1}^{N} [F(i, j) − F(i−1, j)]² ),

where RF and CF are the row and column gradients, respectively, and F(i, j) is the pixel value of the image.
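The spatial-frequency fitness can be computed directly from first differences; a short sketch in Python/NumPy (the function name is ours):

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency SF = sqrt(RF^2 + CF^2) of a grayscale image.

    RF and CF are the RMS values of the row-wise and column-wise
    first differences, matching the HSI function described above.
    """
    img = np.asarray(img, dtype=np.float64)
    m, n = img.shape
    rf = np.sqrt(np.sum((img[:, 1:] - img[:, :-1]) ** 2) / (m * n))
    cf = np.sqrt(np.sum((img[1:, :] - img[:-1, :]) ** 2) / (m * n))
    return np.sqrt(rf ** 2 + cf ** 2)
```

A perfectly flat image has SF = 0; sharper (more detailed) fused images yield larger values.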

3.2. Sharpness Measurement

For multifocus images, the sharp areas of the different source images are selected to construct the fused image, so defining the sharp areas is very important. There are several typical sharpness measures, such as EOG, SF, and SML [18]. They measure the variation of pixels within blocks of the source images to determine sharpness; blocks with larger values are considered in-focus. The experimental results of Huang and Jing [18] show that SML performs better than EOG and SF. It differs from the usual Laplacian operator in that the absolute values of the partial second derivatives are summed instead of their signed values. The SML is defined as

SML(i, j) = Σ_{x=−N}^{N} Σ_{y=−N}^{N} ∇²_M F(i + x, j + y),

where F(i, j) is the pixel value of the image and N determines the window size. ∇²_M F is the modified Laplacian:

∇²_M F(x, y) = |2F(x, y) − F(x − step, y) − F(x + step, y)| + |2F(x, y) − F(x, y − step) − F(x, y + step)|,

where step is a variable that sets the distance between the central pixel and the pixels used to compute the second-order derivative. In this paper, a value of step = 1 is found to produce the best results.
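A sketch of the SML sharpness map in Python/NumPy; the edge-replicating padding and the default window and step values are our assumptions for illustration:

```python
import numpy as np

def sml(img, window=1, step=1):
    """Sum-modified-Laplacian sharpness map (sketch).

    The modified Laplacian uses absolute second differences at pixel
    distance `step`; SML sums it over a (2*window+1)^2 neighborhood.
    """
    f = np.asarray(img, dtype=np.float64)
    p = np.pad(f, step, mode="edge")         # replicate borders
    c = p[step:-step, step:-step]            # the original image
    ml = (np.abs(2 * c - p[:-2 * step, step:-step] - p[2 * step:, step:-step])
          + np.abs(2 * c - p[step:-step, :-2 * step] - p[step:-step, 2 * step:]))
    # sum the modified Laplacian over a local window around each pixel
    q = np.pad(ml, window, mode="constant")
    out = np.zeros_like(ml)
    k = 2 * window + 1
    for dx in range(k):
        for dy in range(k):
            out += q[dx:dx + ml.shape[0], dy:dy + ml.shape[1]]
    return out
```

Flat regions yield zero SML, while pixels near intensity transitions (i.e., in-focus detail) yield large values.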

Source images A and B are divided into nonoverlapping blocks of size m × n. Denote the kth blocks of the source images by A_k and B_k, respectively. The focus decision according to SML is

Z(k) = 1 if SML(A_k) > SML(B_k), and Z(k) = 0 otherwise,

where SML(A_k) and SML(B_k) are the SML values of the kth blocks.

In practice, however, there are always some small holes in the focus region. Here, a morphological filter is used to smooth the decision map and remove hole and gulf defects:

Z₁ = (Z ∘ se) • se,

where the structuring element se is a “diamond” matrix, and ∘ and • are the morphological opening and closing operations, respectively. The focus region R_A of source image A is then defined by the blocks with Z₁(k) = 1, combining the SML decision with the morphological smoothing, and the focus region R_B of source image B is defined in the same way. The blocks of the fused image F are assembled from A and B according to the focus regions:

F_k = A_k if k ∈ R_A, and F_k = B_k otherwise.
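Putting the pieces together, a simplified block-selection fusion step might look like the following sketch. The morphological smoothing of the decision map is omitted here for brevity, and the sharpness callback (e.g., summed SML or block variance) is a parameter rather than a fixed choice:

```python
import numpy as np

def fuse_blocks(img_a, img_b, bw, bh, sharpness):
    """Block-wise spatial-domain fusion (sketch).

    Both images are split into bw-by-bh blocks; for each block, the
    sharper source (per the `sharpness` callback) is copied into the
    fused image.
    """
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    fused = np.empty_like(a)
    h, w = a.shape
    for i in range(0, h, bh):
        for j in range(0, w, bw):
            blk_a = a[i:i + bh, j:j + bw]
            blk_b = b[i:i + bh, j:j + bw]
            fused[i:i + bh, j:j + bw] = (
                blk_a if sharpness(blk_a) > sharpness(blk_b) else blk_b)
    return fused
```

Using block variance as a stand-in sharpness measure, each block of the output comes from whichever source is locally sharper.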

3.3. Chaotic Search

To further enhance the optimization accuracy of BBO, a modified chaotic search is adopted in the late period of each iteration. Chaotic search makes full use of the ergodicity, randomness, and regularity of chaotic motion: it seeks the optimal solution by traversing every state in a small area without repetition [29]. The chaotic search therefore generates several neighboring islands around the island with the highest HSI value to improve the performance of the proposed algorithm.

The basic process is as follows: chaotic variables are first generated from the current optimal solution; the fitness value of each variable is then calculated, and the best chaotic variable is compared with the previous global optimum; the better of the two is kept as the current global optimal solution. This paper adopts the typical logistic map

x_{n+1} = μ x_n (1 − x_n), n = 0, 1, …, N_c,

where N_c is the maximum number of chaotic iterations, set here to 5, and μ is the control parameter. The map is chaotic when μ = 4 and x_0 ∈ (0, 1) with x_0 ∉ {0.25, 0.5, 0.75}. The chaotic search is as follows.

Step 1. The current optimal solution s* (the point with the highest HSI at the current iteration) is mapped into the domain (0, 1) of the logistic map, with the search confined to a neighborhood [a, b] of s*:

x_0 = (s* − a) / (b − a).

Step 2. The chaotic variables x_n are generated by the logistic map and returned to the solution space by the inverse mapping

s_n = a + x_n (b − a).

The fitness value of each candidate s_n is calculated, and the best of them is compared with the current global optimum s*. If it is better than s*, it replaces s* as the global optimal solution; otherwise, s* is retained and the algorithm returns to the main BBO iteration.
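A one-dimensional sketch of this chaotic refinement, assuming a scalar solution confined to the interval [lo, hi]; the exact mapping and the fallback re-seeding when x lands on a fixed point are our assumptions:

```python
import random

def chaotic_refine(x_best, fitness, lo, hi, n_iters=5, mu=4.0):
    """Chaotic local search around the current best solution (sketch).

    The best point is mapped into (0, 1), iterated with the logistic
    map x' = mu*x*(1 - x) (mu = 4 gives chaos), mapped back to
    [lo, hi], and the best of the visited candidates is kept.
    """
    # map the current best into (0, 1); avoid fixed points of the map
    x = (x_best - lo) / (hi - lo)
    if x in (0.0, 0.25, 0.5, 0.75, 1.0):
        x = random.uniform(0.05, 0.2)
    best, best_fit = x_best, fitness(x_best)
    for _ in range(n_iters):
        x = mu * x * (1 - x)                 # logistic chaotic iteration
        cand = lo + x * (hi - lo)            # inverse mapping to search area
        f = fitness(cand)
        if f > best_fit:                     # maximize HSI
            best, best_fit = cand, f
    return best
```

By construction, the returned point is never worse than the starting point, so the refinement can only improve the global optimum.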

3.4. Fusion Algorithm Description

The multifocus image fusion algorithm using biogeography-based optimization is summarized as follows.

Step 1. Define the initial parameters, such as the maximum number of species and the maximum number of iterations.

Step 2. Randomly generate the initial islands. Each island contains two parameters: the width and the height of its block size. Divide the source images into nonoverlapping blocks according to these initial islands.

Step 3. Calculate the sharpness values of the blocks to define the focus regions, and use these focus regions to construct the fused images. Compute the HSI value of each fused image.

Step 4. BBO and chaotic search are executed to update the state of the islands as follows.
(1) Calculate the immigration rate, the emigration rate, and the mutation rate of each island.
(2) Migration: the parameters of the islands are changed to generate new parameters based on the immigration and emigration rates.
(3) Mutation: if the mutation rate is not zero, mutate the island with the highest HSI value.
(4) Chaotic search: generate chaotic variables based on the current optimal solution, obtain new candidate solutions, and update the global optimal solution.

Step 5. Repeat Steps 3 and 4 until the maximum number of iterations is reached. Finally, the block size with the highest HSI is taken as the optimal block size and used to construct the final fused image.
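The five steps above can be condensed into a toy driver. This is a much-simplified stand-in for CSBBO, not the authors' implementation: migration is reduced to copying SIVs from better-ranked islands, the chaotic refinement is omitted, and the candidate block sizes are restricted to a small discrete set:

```python
import random

def csbbo_block_size(hsi, sizes, n_islands=6, n_iters=20, p_mut=0.2):
    """Simplified BBO-style search for the fusion block size (sketch).

    Each island is a [width, height] pair drawn from `sizes`; `hsi` is
    the fitness function (e.g., spatial frequency of the fused image).
    The best island is kept unchanged each generation (elitism).
    """
    islands = [[random.choice(sizes), random.choice(sizes)]
               for _ in range(n_islands)]
    for _ in range(n_iters):
        ranked = sorted(islands, key=hsi, reverse=True)
        elite = list(ranked[0])              # elitism: preserve the best island
        new = [list(s) for s in ranked]
        for k in range(1, n_islands):        # worse-ranked islands immigrate more
            for d in range(2):
                if random.random() < k / n_islands:
                    donor = ranked[random.randrange(0, k)]
                    new[k][d] = donor[d]     # migration from a better island
                if random.random() < p_mut:
                    new[k][d] = random.choice(sizes)   # mutation
        new[0] = elite
        islands = new
    return max(islands, key=hsi)
```

In practice, `hsi` would fuse the source images at the candidate block size and return the spatial frequency of the result.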

4. Experiments

The proposed algorithm is called CSBBO. To verify its performance, CSBBO is compared with several classical algorithms: the pulse coupled neural network (PCNN) [6] in the spatial domain; the nonsubsampled contourlet transform (NSCT) [16] and the nonsubsampled shearlet transform (NSST) [17] in the transform domain; and the genetic algorithm (GA) [20], particle swarm optimization (PSO) [21], and differential evolution (DE) [22] based on intelligent optimization.

Two kinds of experiments, with and without a reference image, are reported in this paper, with a different objective evaluation used in each. With a reference image, the reference is regarded as the ground truth and the algorithms are assessed by the difference or structural similarity between the fused image and the reference. Without a reference image, the algorithms are assessed by how much information is transferred from the source images to the fused image. In the implementation, it is assumed that the two multifocus source images are registered before fusion. All multifocus source images and reference images were downloaded from http://www.imagefusion.org/.

4.1. Fusion with Reference Image

The parameters of the various algorithms are set as follows. The parameters of PCNN, including the maximal iterative number, are set as in [6]. For NSCT and NSST, the decomposition level is four, with the directions from coarse scale to finer scale set as in [15, 17]; the low-pass subband coefficients and the band-pass subband coefficients are merged by the “averaging” scheme and the “absolute maximum choosing” scheme, respectively. The parameters of GA, PSO, and DE (version rand/1/bin) follow [20], [21], and [22], respectively.

With a reference image, the root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity metric (SSIM) [30] are used as quantitative assessment metrics to compare the fusion algorithms. RMSE and PSNR evaluate the difference between the fused image and the reference image, while SSIM evaluates their structural similarity. Higher PSNR and SSIM values and a lower RMSE value indicate better fusion results. They are defined as

RMSE = sqrt( (1/(MN)) Σ_{i=1}^{M} Σ_{j=1}^{N} [R(i, j) − F(i, j)]² ),
PSNR = 10 log₁₀( 255² / RMSE² ),
SSIM = (2 μ_R μ_F + C₁)(2 σ_{RF} + C₂) / [(μ_R² + μ_F² + C₁)(σ_R² + σ_F² + C₂)],

where R(i, j) and F(i, j) are the pixel values of the reference image and the fused image, respectively, and the image size is M × N. μ, σ², and σ_{RF} denote the mean, variance, and covariance, respectively, and C₁ and C₂ are small constants close to zero that stabilize the division.
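For reference, the RMSE and PSNR metrics as used above can be computed as follows (an 8-bit peak value of 255 is assumed; the function names are ours):

```python
import numpy as np

def rmse(ref, fused):
    """Root mean squared error between reference and fused images."""
    ref = np.asarray(ref, dtype=np.float64)
    fused = np.asarray(fused, dtype=np.float64)
    return np.sqrt(np.mean((ref - fused) ** 2))

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB for `peak`-valued images."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(fused, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Identical images give RMSE 0 and infinite PSNR; larger pixel differences lower the PSNR.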

Because of the stochastic behavior of GA, PSO, DE, and BBO, it is impossible to guarantee that the optimal solution is reached after a given number of iterations [20–22]. Moreover, because of premature convergence, the solution can hardly be improved in the late iterations, while too many iterations result in very high computational cost. The experimental results in [22] show that the performance of image fusion based on GA and DE cannot be significantly improved after 20 iterations, and the same trend holds for PSO, BBO, and CSBBO in our experiments. Therefore, the maximum number of iterations of all the intelligent optimization algorithms is set to 20 to reduce complexity.

To analyze the parameters of CSBBO, namely the population number and the maximum mutation parameter, an experiment on “Pepper” image fusion is reported in Table 1. The three “Pepper” images are shown in Figure 2: Figure 2(a) is the reference image, in focus everywhere, while Figures 2(b) and 2(c) focus on the center and the surround, respectively. All images have 512 × 512 pixels with 256 gray levels. Considering the randomness of the four intelligent optimization algorithms, each experiment is repeated 30 times and the average results are reported. From Table 1, we can see that the differences between the RMSE values obtained for different population numbers (PN) are small, so PN is set to 10 as a balance of efficiency and complexity. The most suitable mutation parameter, giving the best RMSE value when PN is 10, is 0.2.

Table 1: Average RMSE results obtained by CSBBO for Pepper images fusion.
Figure 2: Pepper images: (a) reference image; (b) focus on center; (c) focus on surround.

The fused results of the various algorithms are shown in Figure 3, where the value in parentheses is the optimal block size. Each algorithm obtains a good fused image that is almost the same as the reference image. For a clearer comparison, the differences between each fused image and the reference image are shown in Figure 4. Ideally, the difference should be zero, so fewer residual features in the difference image mean better fusion performance. The difference images obtained by PCNN, NSCT, and NSST have more residual features than those of GA, PSO, DE, BBO, and CSBBO, which indicates that the fusion algorithms based on intelligent block-size optimization improve fusion performance. Moreover, the difference image obtained by CSBBO has the least residue among them.

Figure 3: Fusion results for Pepper images: (a) PCNN; (b) NSCT; (c) NSST; (d) GA; (e) PSO; (f) DE; (g) BBO; (h) CSBBO.
Figure 4: Difference results for Pepper images: (a)~(h) is the difference between Figure 2(a) and Figures 3(a)~3(h), respectively.

Table 2 shows the quantitative evaluation of the various fusion algorithms against the reference image. GA, PSO, DE, BBO, and CSBBO, all based on intelligent optimization, obtain higher PSNR and SSIM values and lower RMSE values; their performance is superior to that of PCNN, NSCT, and NSST. Among them, BBO performs slightly better than GA, PSO, and DE, and CSBBO achieves the highest PSNR and SSIM values and the lowest RMSE value, extracting more useful features from the source images than the other fusion algorithms.

Table 2: Evaluation of various image fusion algorithms with the reference image.
4.2. Fusion without Reference Image

Natural digital camera images have no reference image; they contain both in-focus and defocused parts because objects lie at different distances from the camera. We choose two gray images, Plane (160 × 160) and Clock (256 × 256), and two color images, Flower (480 × 480) and Book (512 × 512). Each has two source images with different focus parts, as shown in Figure 5.

Figure 5: Source images: (a) Plane left focus; (b) Plane right focus; (c) Clock left focus; (d) Clock right focus. (e) Flower left focus; (f) Flower right focus; (g) Book left focus; (h) Book right focus.

For these four image pairs, the fused results of various algorithms are presented in Figures 6, 8, 10, and 12, respectively. To make better comparisons, the difference results of various algorithms are presented in Figures 7, 9, 11, and 13, respectively.

Figure 6: Fusion results for Plane images: (a) PCNN; (b) NSCT; (c) NSST; (d) GA; (e) PSO; (f) DE; (g) BBO; (h) CSBBO.
Figure 7: Difference results for Plane images: (a)~(h) is the difference between Figure 5(b) and Figures 6(a)~6(h), respectively.
Figure 8: Fusion results for Clock images: (a) PCNN; (b) NSCT; (c) NSST; (d) GA; (e) PSO; (f) DE; (g) BBO; (h) CSBBO.
Figure 9: Difference results for Clock images: (a)~(h) is the difference between Figure 5(d) and Figures 8(a)~8(h), respectively.
Figure 10: Fusion results for Flower images: (a) PCNN; (b) NSCT; (c) NSST; (d) GA; (e) PSO; (f) DE; (g) BBO; (h) CSBBO.
Figure 11: Difference results for Flower images: (a)~(h) is the difference between Figure 5(f) and Figures 10(a)~10(h), respectively.
Figure 12: Fusion results for Book images: (a) PCNN; (b) NSCT; (c) NSST; (d) GA; (e) PSO; (f) DE; (g) BBO; (h) CSBBO.
Figure 13: Difference results for Book images: (a)~(h) is the difference between Figure 5(h) and Figures 12(a)~12(h), respectively.

From Figures 6 and 8, for the gray images Plane and Clock, we can see that every algorithm except PCNN produces a clear fused image. The image fused by PCNN has poor visual quality because some blocks in the focus areas are blurred, such as the front plane and the clock surface. Moreover, it does not maintain continuous edge features, such as the book edge and the clock edge in Figure 8. The images fused by NSCT, NSST, GA, PSO, DE, BBO, and CSBBO are visually almost indistinguishable.

However, the fusion performance of these algorithms can be distinguished from the difference results in Figures 7 and 9. The difference images obtained by NSCT and NSST have more residual features than those of GA, PSO, DE, BBO, and CSBBO: a large residue of the front plane remains in Figures 7(b) and 7(c), and the numbers on the clock surface clearly remain in Figures 9(b) and 9(c). Figures 7(d)~7(g) and 9(d)~9(g) show that the difference results of GA, PSO, DE, BBO, and CSBBO are better than those of the other algorithms, with CSBBO leaving the least residual features. The difference images of GA, PSO, DE, and BBO retain more residual information, especially at the edges of the focus areas in Figures 9(d)~9(f), and show more block artifacts in Figures 7(d)~7(f). Figures 6(h), 7(h), 8(h), and 9(h) show that CSBBO detects the focus region accurately and transfers more useful features from the source images to the fused image.

From Figures 10 and 12, for the color images Flower and Book, we can compare the visual performance of each algorithm. The image fused by PCNN shows brightness distortion and blurred edges, such as the switch and the bricks in Figure 10. The images fused by NSCT and NSST preserve brightness and edge features well but still contain some obviously blurred areas, such as the words at the top right corner of Figure 12. The images fused by GA, PSO, DE, BBO, and CSBBO improve the visual quality because these algorithms efficiently suppress blurring; the words at the top right corner of Figure 12 are clearer in their results. Again, the fusion performance can be distinguished from the difference results in Figures 11 and 13: the difference image of PCNN has the most residual features (Figures 11(a) and 13(a)), and those of NSCT and NSST have more residual features than GA, PSO, DE, BBO, and CSBBO (Figures 11(b), 11(c), 13(b), and 13(c)). Compared with GA, PSO, DE, and BBO, CSBBO leaves the least residual features. From these results, CSBBO obtains clearer and more natural fused images than the other algorithms.

For objective assessment without a reference image, natural multifocus images are evaluated by nonreference fusion metrics. In this paper, the spatial frequency (SF) [4], feature mutual information (FMI) [31], an edge-information-based metric [32], and a block-similarity-based metric [33] are used to evaluate the fusion performance of the various algorithms. SF evaluates the ability to express small details in the fused image, as defined above. FMI measures the amount of information conducted from the source images to the fused image:

FMI = MI(A, F) + MI(B, F),
MI(A, F) = Σ_{a, f} p_{AF}(a, f) log₂[ p_{AF}(a, f) / (p_A(a) p_F(f)) ],

where A and B are the two source images, F is the fused image, p_{AF} and p_{BF} are joint probability distributions, and p_A, p_B, and p_F are marginal probability distributions.
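The mutual-information building block of FMI can be estimated from a joint histogram; the bin count and the use of base-2 logarithms here are our choices for a minimal sketch:

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """Mutual information between two images via their joint histogram.

    This is the MI(A, F) term used by the FMI metric: it is zero when
    the images are statistically independent and grows as the fused
    image carries more information about the source.
    """
    a = np.asarray(a).ravel()
    b = np.asarray(b).ravel()
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()                  # joint probability p_AB
    pa = p.sum(axis=1, keepdims=True)        # marginal of a
    pb = p.sum(axis=0, keepdims=True)        # marginal of b
    nz = p > 0                               # avoid log(0)
    return float(np.sum(p[nz] * np.log2(p[nz] / (pa @ pb)[nz])))
```

An image compared with itself yields its own entropy (positive), while comparison with a constant image yields zero.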

The edge-information-based metric measures the relative amount of edge information transferred from the source images into the fused image:

Q^{AB/F} = Σ_{i, j} [ Q^{AF}(i, j) w^A(i, j) + Q^{BF}(i, j) w^B(i, j) ] / Σ_{i, j} [ w^A(i, j) + w^B(i, j) ],

where Q^{AF} and Q^{BF} are the edge preservation values for the two source images, computed from the Sobel edge strength and orientation preservation values at each location (i, j), and w^A and w^B are weighting coefficients.

The block-similarity-based metric builds on the universal image quality index Q₀ and uses the similarity between blocks of pixels in the source images and the fused image as the weighting factor:

Q_W = Σ_{w ∈ W} c(w) [ λ(w) Q₀(A, F | w) + (1 − λ(w)) Q₀(B, F | w) ],

where w is the analysis window, W is the family of all windows, Q₀ is the image quality index computed within w, and λ(w) is the relative similarity in the spatial domain between each input image and the fused image.

For all the nonreference fusion metrics, a larger value indicates better fusion performance. Table 3 shows the objective assessments of the fused images obtained by the various fusion algorithms without a reference image. CSBBO outperforms the other algorithms in terms of SF, FMI, and the edge-information-based metric for all images. On the block-similarity-based metric, PCNN performs slightly better than CSBBO for the Plane image, but there are obvious defects in the image fused by PCNN in Figure 6(a). Based on this analysis, it can be concluded that CSBBO performs well in both quantitative and visual evaluations.

Table 3: Evaluation of various image fusion algorithms without the reference image.

5. Conclusion

In spatial domain fusion, a fixed block size for the source images causes block artifacts in multifocus image fusion. To solve this problem, a novel multifocus image fusion algorithm using biogeography-based optimization is proposed in this paper. An optimal block size is obtained for the source images by BBO combined with chaotic search, and the final fused image is constructed with this optimal block size rather than a fixed one. Experiments with and without a reference image show that CSBBO outperforms the traditional PCNN, NSCT, and NSST algorithms. Furthermore, among the fusion algorithms based on intelligent optimization, CSBBO is superior to GA, PSO, DE, and BBO in terms of both visual analysis and quantitative evaluation. In the future, it is worth investigating other fitness functions that may affect fusion performance, and applying the proposed algorithm to the fusion of other types of images.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the editor and anonymous reviewers. This research is supported by the National Natural Science Foundation of China (no. 61308102), Specialized Research Fund for the Doctoral Program of Higher Education (no. 20130185120038), China Postdoctoral Science Foundation (no. 2013M531946), Science and Technology Planning Project of Sichuan Province (nos. 2014GZ0005 and 2014JY0228), and Fundamental Research Funds for the Central Universities, China (no. ZYGX2013J059).

References

  1. A. A. Goshtasby and S. Nikolov, “Image fusion: advances in the state of the art,” Information Fusion, vol. 8, no. 2, pp. 114–118, 2007.
  2. A. P. James and B. V. Dasarathy, “Medical image fusion: a survey of the state of the art,” Information Fusion, vol. 19, no. 1, pp. 4–19, 2014.
  3. A. Anish and T. J. Jebaseeli, “A survey on multi-focus image fusion methods,” International Journal of Advanced Research in Computer Engineering & Technology, vol. 1, no. 8, pp. 319–324, 2012.
  4. S. T. Li, J. T. Kwok, and Y. N. Wang, “Combination of images with diverse focuses using the spatial frequency,” Information Fusion, vol. 2, no. 3, pp. 169–176, 2001.
  5. S. Li, J. T. Kwok, and Y. Wang, “Multifocus image fusion using artificial neural networks,” Pattern Recognition Letters, vol. 23, no. 8, pp. 985–997, 2002.
  6. Z. Wang, Y. Ma, and J. Gu, “Multi-focus image fusion using PCNN,” Pattern Recognition, vol. 43, no. 6, pp. 2003–2016, 2010.
  7. N. Cvejic, D. Bull, and N. Canagarajah, “Region-based multimodal image fusion using ICA bases,” IEEE Sensors Journal, vol. 7, no. 5, pp. 743–750, 2007.
  8. T. Wan, C. C. Zhu, and Z. C. Qin, “Multifocus image fusion based on robust principal component analysis,” Pattern Recognition Letters, vol. 34, no. 9, pp. 1001–1008, 2013.
  9. H. J. Zhao, Z. W. Shang, Y. Y. Tang, and B. Fang, “Multi-focus image fusion based on the neighbor distance,” Pattern Recognition, vol. 46, no. 3, pp. 1002–1011, 2013.
  10. A. Toet, L. J. van Ruyven, and J. M. Valeton, “Merging thermal and visual images by a contrast pyramid,” Optical Engineering, vol. 28, no. 7, pp. 789–792, 1989.
  11. G. Pajares and J. M. de la Cruz, “A wavelet-based image fusion tutorial,” Pattern Recognition, vol. 37, no. 9, pp. 1855–1872, 2004.
  12. M. Beaulieu, S. Faucher, and L. Gagnon, “Multi-spectral image resolution refinement using stationary wavelet transform,” in Proceedings of the IEEE International Conference on Geoscience and Remote Sensing, pp. 4032–4034, July 2003. View at Scopus
  13. J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, and N. Canagarajah, “Pixel- and region-based image fusion with complex wavelets,” Information Fusion, vol. 8, no. 2, pp. 119–130, 2007. View at Publisher · View at Google Scholar · View at Scopus
  14. F. Nencini, A. Garzelli, S. Baronti, and L. Alparone, “Remote sensing image fusion using the curvelet transform,” Information Fusion, vol. 8, no. 2, pp. 143–156, 2007. View at Publisher · View at Google Scholar · View at Scopus
  15. S. Yang, M. Wang, L. Jiao, R. Wu, and Z. Wang, “Image fusion based on a new contourlet packet,” Information Fusion, vol. 11, no. 2, pp. 78–84, 2010. View at Publisher · View at Google Scholar · View at Scopus
  16. C. Fei and J.-P. Li, “Multi-focus image fusion based on nonsubsampled contourlet transform and multi-objective optimization,” in Proceedings of the International Conference on Wavelet Active Media Technology and Information Processing (ICWAMTIP '12), pp. 189–192, chn, December 2012. View at Publisher · View at Google Scholar · View at Scopus
  17. Q.-G. Miao, C. Shi, P.-F. Xu, M. Yang, and Y.-B. Shi, “A novel algorithm of image fusion using shearlets,” Optics Communications, vol. 284, no. 6, pp. 1540–1547, 2011. View at Publisher · View at Google Scholar · View at Scopus
  18. W. Huang and Z. Jing, “Evaluation of focus measures in multi-focus image fusion,” Pattern Recognition Letters, vol. 28, no. 4, pp. 493–500, 2007. View at Publisher · View at Google Scholar · View at Scopus
  19. X. M. Zhang, J. Q. Han, and P. Liu, “Restoration and fusion optimization scheme of multifocus image using genetic search strategies,” Optica Applicata, vol. 35, no. 4, pp. 927–942, 2005. View at Google Scholar
  20. J. Kong, K. Zheng, J. Zhang, and X. Feng, “Multi-focus image fusion using spatial frequency and genetic algorithm,” International Journal of Computer Science and Network Security, vol. 8, no. 2, pp. 220–224, 2008. View at Google Scholar
  21. X. M. Zhang, L. B. Sun, J. Han, and G. Chen, “An application of swarm intelligence binary particle swarm optimization algorithm to multi-focus image fusion,” Optica Applicata, vol. 40, no. 4, pp. 949–964, 2010. View at Google Scholar · View at Scopus
  22. V. Aslantas and R. Kurban, “Fusion of multi-focus images using differential evolution algorithm,” Expert Systems with Applications, vol. 37, no. 12, pp. 8861–8870, 2010. View at Publisher · View at Google Scholar · View at Scopus
  23. D. Simon, “Biogeography-based optimization,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 6, pp. 702–713, 2008. View at Publisher · View at Google Scholar · View at Scopus
  24. V. K. Panchal, P. Singh, N. Kaur, and H. Kundra, “Biogeography based satellite image classification,” International Journal of Computer Science and Information Security, vol. 6, no. 2, pp. 269–274, 2009. View at Google Scholar
  25. X. H. Wang, H. B. Duan, and D. L. Luo, “Cauchy biogeography-based optimization based on lateral inhibition for image matching,” Optik, vol. 124, no. 22, pp. 5447–5453, 2013. View at Publisher · View at Google Scholar · View at Scopus
  26. A. Chatterjee, P. Siarry, A. Nakib, and R. Blanc, “An improved biogeography based optimization approach for segmentation of human head CT-scan images employing fuzzy entropy,” Engineering Applications of Artificial Intelligence, vol. 25, no. 8, pp. 1698–1709, 2012. View at Publisher · View at Google Scholar · View at Scopus
  27. J. Jasper, S. B. Shaheema, and S. B. Shiny, “Natural image enhancement using a biogeography based optimization enhanced with blended migration operator,” Mathematical Problems in Engineering, vol. 2014, Article ID 232796, 11 pages, 2014. View at Publisher · View at Google Scholar · View at Scopus
  28. P. Zhang, P. Wei, and H.-Y. Yu, “Biogeography-based optimisation search algorithm for block matching motion estimation,” IET Image Processing, vol. 6, no. 7, pp. 1014–1023, 2012. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  29. C.-G. Fei and Z.-Z. Han, “Improved chaotic optimization algorithm,” Control Theory and Applications, vol. 23, no. 3, pp. 471–474, 2006. View at Google Scholar · View at Scopus
  30. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004. View at Publisher · View at Google Scholar · View at Scopus
  31. M. B. A. Haghighat, A. Aghagolzadeh, and H. Seyedarabi, “A non-reference image fusion metric based on mutual information of image features,” Computers and Electrical Engineering, vol. 37, no. 5, pp. 744–756, 2011. View at Publisher · View at Google Scholar · View at Scopus
  32. C. S. Xydeas and V. Petrović, “Objective image fusion performance measure,” Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000. View at Publisher · View at Google Scholar · View at Scopus
  33. N. Cvejic and A. Łoza, “A novel metric for performance evaluation of image fusion algorithms,” Transactions on Engineering Computing and Technology, vol. 7, pp. 80–85, 2005. View at Google Scholar