Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 414785, 11 pages
http://dx.doi.org/10.1155/2013/414785
Research Article

Robust Multisensor Image Matching Using Bayesian Estimated Mutual Information

College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha, Hunan 410073, China

Received 24 May 2013; Accepted 26 August 2013

Academic Editor: Marco Perez-Cisneros

Copyright © 2013 Lurong Shen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Mutual information (MI) has been widely used in multisensor image matching, but it can produce mismatches when the images contain cluttered backgrounds. Additional prior information, however, can greatly improve the matching performance. In this paper, a robust Bayesian estimated mutual information measure, termed BMI, is proposed for multisensor image matching. The method is implemented using gradient prior information: the prior is estimated by the kernel density estimate (KDE) method, and the likelihood is modeled from the distance between gradient orientations. To further improve robustness, matching is restricted to the regions where the corresponding pixels of the template image are sufficiently salient. Experiments on several groups of multisensor images show that the proposed method outperforms the standard MI in robustness and accuracy, performs comparably to Pluim’s method, and requires far less computation.

1. Introduction

The matching of multisensor images is an important preprocessing step for many applications, such as navigation, data fusion, and visualization. Because different sensors have different physical characteristics, the relationship between the intensities of matching pixels is often complex. Visual features present in one sensor image may not appear in another, and vice versa; image contrast may differ between the two images over the same regions; and multiple intensities in one image may map to a single intensity in the other, and vice versa. These uncorrelated intensities render ordinary intensity-based matching methods ineffective, which makes the matching of multisensor images a challenging problem in computer vision.

Previous work on multisensor image matching can generally be categorized into two classes [1]. The first class seeks invariant representations among multisensor images, such as point features [2], edge features [3], contour features [4], and edge orientation maps [5]; the problem can then be solved with traditional monosensor matching methods. Nevertheless, feature extraction is regarded as one of the most difficult problems in image processing, especially in the multisensor case, since a feature present in one image may not appear in the others. In addition, these methods usually discard most of the image details, which may lead to incorrect matching.

The second class uses an invariant similarity measure between images. These methods operate directly in the intensity domain and have been successfully used in a variety of multisensor registration tasks. Within these methods, the images are treated as combinations of different sets, each standing for a separate gray level, under the assumption that the same component of the same object is consistent within one image and that pixels with similar intensities in one image correspond to pixels with similar intensities in the other. In other words, there may be no pixel-to-pixel correspondence in intensity, but there do exist set-to-set correspondences between the two images; ignoring the intensity values themselves, the topology of the corresponding sets must be similar in both images. This correspondence is well captured by the joint intensity scatter plot (JISP) or the joint histogram (JH) [6], and measuring their dispersion quantifies the degree of matching. One of the most successful dispersion measures is the entropy: the lower the entropy, the more tightly clustered the scatter plot, and hence the closer the match. Mutual information (MI) [7, 8] and normalized mutual information (NMI) [9] are considered the state-of-the-art matching frameworks among the entropy-based methods.
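
For concreteness, the following short Python sketch shows how a joint histogram and the entropy-based NMI of two equally sized grayscale images can be computed; the function name, the 64-bin choice, and the use of natural logarithms are illustrative assumptions rather than a prescribed implementation.

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    """Entropy-based NMI = (H(A) + H(B)) / H(A, B) of two grayscale images."""
    # Joint histogram of co-occurring intensities, normalized to a joint probability.
    jh, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    jp = jh / jh.sum()

    # Marginal probabilities follow directly from the joint probability.
    pa = jp.sum(axis=1)
    pb = jp.sum(axis=0)

    eps = np.finfo(float).eps  # avoids log(0); empty bins contribute nothing
    h_a = -np.sum(pa * np.log(pa + eps))
    h_b = -np.sum(pb * np.log(pb + eps))
    h_ab = -np.sum(jp * np.log(jp + eps))
    return (h_a + h_b) / h_ab
```

The tighter the clusters in the joint histogram, the lower the joint entropy and the larger this ratio, which is exactly the dispersion argument made above.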

Nevertheless, the robustness of MI-based matching methods is questionable. A mismatched result of the NMI between an infrared (IR) and a visible (VS) image is shown in Figure 1, in which the statistical relationship between the images is weakened by the cluttered background. In addition, a “grey stripe” registration example also demonstrates this pitfall of NMI in [10].

Figure 1: A mismatch example of MI. (a) An infrared template image, (b) a visible reference image and the matching result, and (c) the matching result of our method. There are many background textures around the target (the car) in the infrared image but only a few in the VS image, which weakens the statistical relationship and leads to a mismatch when the NMI method is used. The matching result is shown by superimposing the Sobel edges of the template image directly on the reference image.

Recently, many approaches have been proposed to improve the MI by incorporating additional spatial information. Studholme et al. [11] introduce a prior of labeled connected regions as an additional information channel to the MI. Guo and Lu [12] adopt the gradient vector flow (GVF) to enhance the original intensity images before applying the MI, which improves the success rate. Kim et al. [13] propose a method based on a 3D joint histogram (JH) that integrates intensity information and edge orientation information, providing more robustness. Lee et al. [14] extend [13] by using a modified GVF to enlarge the capture range of gradient-based MI. Wang and Tian [15] encode the images into 32 bins according to gradient magnitude and orientation and then compute the MI directly on the encoded field, which improves the computational efficiency of the MI. Liu et al. [16] improve this method with an adaptive combination of an intensity-based MI and a coded gradient-based MI. Fan et al. [17] propose a method that incorporates spatial information through a feature-based selection mechanism, such as the Harris corner, with the JH increased to 4D. Loeckx et al. [18] propose conditional mutual information (cMI) as a new similarity measure for image registration; cMI starts from a 3D joint histogram that, in addition to the intensity dimensions, incorporates a spatial dimension expressing the location of the joint intensity pair. Pluim et al. [19] modify the MI-based method by multiplying the MI by a gradient term, which not only seeks to align locations of high gradient magnitude but also favors similar gradient orientations at these locations. This method is referred to as GNMI in this paper.

These improvements to MI-based methods rely heavily on the invariant characteristics mentioned for the first class, for example, gradient magnitude and orientation information, as well as corner features. Since they take full advantage of both classes, we regard them as a third class and refer to this additional information as prior information in this paper.

It should be pointed out that some methods that introduce the prior into MI are problematic. For example, in [15], the pixels are encoded according to their gradient magnitude and orientation, and the intensities in the standard MI are replaced with these gradient codes. It is known that gradient orientations tend to be parallel between multisensor images, so in a true match they should be distributed along the diagonal of the JH. However, because the codes play the same role in the JH as the intensities do in the standard MI, and MI measures only the dispersion of the JH, this method still yields a high correlation even when the compact bins lie far from the diagonal. Moreover, the gradient magnitudes that participate in the coding are not reliable, as magnitudes vary greatly between multisensor images [13, 14]. Better methods are presented in [13, 16, 19], which treat a measure of the gradient vectors between the multisensor images as a weight on the MI; however, they suffer from expensive computation.

In this paper, a new similarity measure that incorporates prior information into MI through the Bayesian method is proposed; for brevity it is named Bayesian MI (BMI). BMI is derived by replacing the classical probabilities in MI with Bayesian probabilities, namely the prior, the likelihood, and the posterior. For example, the classical conditional probability can be regarded as a likelihood or a posterior in the Bayesian framework, through which subjective experience can be introduced into the estimation. To avoid accumulating the JH, the BMI is defined directly on each pixel pair, and the images can then be compared by the mean BMI over the valid matching pixel pairs.

An implementation of the BMI based on gradient prior information is then proposed. Since magnitude information is not reliable across multisensor images, we are concerned only with whether the gradient magnitude of a pixel is salient, and we treat the salient pixels as the saliency. In general, the salient parts of an image contribute more to matching [20, 21], so we restrict the matching pixel pairs to a mask in which the pixels of the template image are salient.

In the implementation, the orientation of each pixel is defined first: salient pixels take the absolute value of their gradient orientation, while nonsalient pixels are set to zero. Second, the prior distribution is estimated with the kernel density estimate (KDE) method, and the likelihood between orientations is modeled by a Gaussian function of their angular distance: the closer the orientations, the more similar the pixel pair. Finally, the sequential-statistic BMI based on the gradient information is assembled from these components.

At the end of this paper, experiments are conducted on several groups of images captured by different sensors. The results suggest that the proposed method is more robust and more accurate than the standard NMI and performs comparably to Pluim’s method [19], while requiring far less computation.

The rest of this paper is organized as follows. Section 2 briefly introduces the standard MI and derives the proposed BMI from the Bayesian formula. The implementation of the BMI based on the gradient prior information is presented in Section 3. Section 4 reports experimental results and discussion. Finally, conclusions are drawn in Section 5.

2. Proposed Method

2.1. Mutual Information

The MI method is of general significance because it makes few assumptions about image intensities: it assumes neither linear nor even functional correlation, but only statistical dependence [10].

Given two images $A$ and $B$, one can define their joint probability (JP) $p_{AB}(a,b)$ by simply normalizing the JH. Let $p_A(a)$ and $p_B(b)$ denote the corresponding marginal probabilities, which can be obtained directly from $p_{AB}(a,b)$. The MI between $A$ and $B$ based on Shannon entropy is given by
$$\mathrm{MI}(A,B)=\sum_{a,b} p_{AB}(a,b)\,\log\frac{p_{AB}(a,b)}{p_A(a)\,p_B(b)}. \tag{1}$$

Apart from (1), MI can also be described in many other forms, as summarized in [22]:
$$\mathrm{MI}(A,B)=H(A)+H(B)-H(A,B)=H(A)-H(A\mid B)=H(B)-H(B\mid A), \tag{2}$$
where $H(A)=-\sum_a p_A(a)\log p_A(a)$ and $H(B)=-\sum_b p_B(b)\log p_B(b)$ are the Shannon entropies of images $A$ and $B$, which indicate the amount of uncertainty about $A$ and $B$, respectively. $H(A\mid B)$ and $H(B\mid A)$ denote the conditional entropies, indicating the uncertainty left in $A$ or $B$ when $B$ or $A$ is known. $H(A,B)$ is the joint entropy, which indicates the total uncertainty of $A$ and $B$. When $B$ is known, the conditional probability of $A$ is written $p_{A\mid B}(a\mid b)$; it is easily deduced from the JP by classical probability theory.

In fact, $-\log p_A(a)$ can be regarded as the self-information of $a$, and the entropy $H(A)$ represents its mean. Similarly, the mutual information between the pixel values $a$ and $b$ can be defined as
$$\mathrm{MI}(a,b)=\log\frac{p_{AB}(a,b)}{p_A(a)\,p_B(b)}, \tag{3}$$
and $\mathrm{MI}(A,B)$ can be regarded as the mathematical expectation of $\mathrm{MI}(a,b)$.
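
As a quick check of this expectation view, averaging the pointwise term of (3) over all $N$ pixel pairs recovers definition (1); the same step is used in the next subsection to obtain the sequential form of the BMI:
$$\frac{1}{N}\sum_{k=1}^{N}\log\frac{p_{AB}(a_k,b_k)}{p_A(a_k)\,p_B(b_k)}=\sum_{a,b}\frac{n_{ab}}{N}\log\frac{p_{AB}(a,b)}{p_A(a)\,p_B(b)}=\sum_{a,b}p_{AB}(a,b)\log\frac{p_{AB}(a,b)}{p_A(a)\,p_B(b)}=\mathrm{MI}(A,B),$$
where $n_{ab}$ is the number of pixel positions $k$ at which $(a_k,b_k)=(a,b)$, so that $p_{AB}(a,b)=n_{ab}/N$.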

2.2. Bayesian Mutual Information

BMI is derived directly from the MI by regarding the conditional probability in (2) as a posterior probability, which can be estimated from prior information through the Bayesian method.

Let $P(b)$ be the prior probability; then, according to the Bayesian formula, we have
$$P(b\mid a)=\frac{P(a\mid b)\,P(b)}{P(a)}, \tag{4}$$
where $P(a\mid b)$ and $P(a)$ denote the likelihood probability and the evidence probability, respectively. $P(a)$ can be estimated from the prior and the likelihood through the total probability formula, $P(a)=\sum_b P(a\mid b)\,P(b)$. Here, $P$ is used to distinguish the Bayesian probabilities from the classical probabilities $p$. Using (2) and (4), the BMI can be defined as
$$\mathrm{BMI}(A,B)=\sum_{a,b} p_{AB}(a,b)\,\log\frac{P(a\mid b)}{P(a)}. \tag{5}$$

Similar to the MI, (5) requires the JP to be accumulated beforehand. For convenience, we recast (5) in a sequential-statistic form. Since
$$p_{AB}(a,b)=\frac{1}{N}\sum_{x}\delta\big(A(x)-a\big)\,\delta\big(B(x)-b\big),$$
where $N$ is the total number of pixels involved in the matching, $\delta$ is the Kronecker function, and $A(x)$ and $B(x)$ are the pixel values of $A$ and $B$ at position $x$, (5) can be rewritten as
$$\mathrm{BMI}(A,B)=\frac{1}{N}\sum_{x}\mathrm{BMI}\big(A(x),B(x)\big),\qquad \mathrm{BMI}(a,b)=\log\frac{P(a\mid b)}{P(a)}, \tag{6}$$
where $\mathrm{BMI}(a,b)$ denotes the BMI of the pixel pair $(a,b)$. Obviously, a higher BMI is achieved with a higher likelihood and a lower evidence. Equation (6) is applicable regardless of whether $a$ and $b$ are discrete or continuous, whereas (5) holds only in the discrete form.

BMI can be intuitively understood in terms of communication theory: regarding $B$, the likelihood, and $A$ as the message source, the channel, and the sink, respectively, BMI is simply an evaluation of the rate of correct transmission [23].

It should be noticed that the prior, the likelihood, the evidence, and their logarithms can all be calculated offline; moreover, since the per-pixel BMI in (6) depends only on the code pair $(a,b)$, the computational cost can be reduced remarkably by the lookup-table (LUT) technique.
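
The following Python sketch illustrates how such lookup tables can be used to evaluate the mean per-pixel BMI of (6); the array layout, the indexing convention, and the clipping of negative values (anticipating Section 3.4) are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def bmi_from_luts(tmpl_codes, ref_codes, log_like_lut, log_evid_lut, mask):
    """Mean per-pixel BMI evaluated through precomputed lookup tables.

    tmpl_codes, ref_codes : integer code maps of the template and of the
        subimage cropped from the reference (same shape).
    log_like_lut[i, j]    : precomputed log-likelihood of reference code j
        given template code i.
    log_evid_lut[j]       : precomputed log-evidence of reference code j.
    mask                  : boolean array, True where the template is salient.
    """
    i = tmpl_codes[mask]
    j = ref_codes[mask]
    # Per-pixel BMI = log-likelihood - log-evidence, cf. (6); negative values
    # are clipped to zero, as Section 3.4 treats them as mismatches.
    per_pixel = log_like_lut[i, j] - log_evid_lut[j]
    return np.clip(per_pixel, 0.0, None).mean()
```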

3. Implementation

In this section, the proposed BMI is implemented using the prior information of the gradient. In fact, BMI is applicable whenever the prior and the likelihood can be modeled, for example, from the intensity and local statistics in monosensor matching, or from the gradient orientation and magnitude and the edges, rather than the intensity, in multisensor matching.

As mentioned in the introduction, the salient parts of an image carry most of its information [24] and thus contribute more to matching. In [21], Qin et al. extract a salient vector for each pixel using principal axis analysis (PAA); the more similar the corresponding vectors, the higher the weight of the pixel pair in the JP. In [20], Kim et al. extract salient regions by expanding the edges, and the MI is then calculated only over the regions that are salient in both images, which enhances the statistical correlation between the IR and VS images.

In this study, we simply define the saliency as the pixels whose gradient magnitude exceeds a threshold proportional to the mean gradient magnitude of the full image. Different from Kim’s scheme, the proposed BMI is computed only on the pixel pairs at which the template image is salient, regardless of the reference image.

Let $R$ and $T$ denote the reference image and the template image, respectively, and let $R_{u,v}$ denote the subimage of $R$ whose top-left corner lies at $(u,v)$ and which has the same size as $T$. Both images are filtered with a Gaussian window to suppress noise.

The realization of the BMI consists of the gradient orientation mapping, the prior probability estimation, and the likelihood probability modeling.

It should be noted that, in the following implementation and in the experiments, only a translation between the two images is assumed. This assumption does not affect the demonstration of the effectiveness of our method.

3.1. Gradient Orientation Mapping

Let $\theta_T$ and $\theta_R$ be the gradient orientation maps of $T$ and $R_{u,v}$, and let $(x,y)$ be the coordinate of a pixel. Taking $\theta_T$ as an example, it is defined as
$$\theta_T(x,y)=\begin{cases}\Phi\big(\operatorname{sgn}(g_y)\,\mathbf{g}(x,y)\big), & |\mathbf{g}(x,y)|\ge T_g,\\ 0, & \text{otherwise},\end{cases} \tag{7}$$
where $\Phi(\cdot)$ is a function that returns the phase of the gradient vector $\mathbf{g}(x,y)$, $|\mathbf{g}(x,y)|$ is its magnitude, and $\operatorname{sgn}$ is the sign function, which reverses the vectors in the 3rd and 4th quadrants and thus limits $\theta_T$ to $(0,\pi]$. $T_g$ is the threshold that separates the gradients of salient structures from noise,
$$T_g=\frac{a}{N_T}\sum_{(x,y)}|\mathbf{g}(x,y)|,$$
where $a$ is an empirical percentage of the mean magnitude and $N_T$ is the total number of pixels of $T$; in practice, $a$ is fixed empirically. Here, the fact that gradient orientations are parallel to each other between multisensor images is exploited; in other words, the orientations are either the same or opposite. When the gradient magnitude is small, $\theta_T(x,y)$ is set to zero, and the salient pixels with horizontal orientation are assigned $\pi$ so that they are not confused with the zero value.

Let $\Delta\theta$ be the resolution angle; the discrete form of (7) is then obtained by quantizing the orientation into multiples of $\Delta\theta$:
$$\theta_T^{d}(x,y)=\Delta\theta\left\lceil\frac{\theta_T(x,y)}{\Delta\theta}\right\rceil.$$

In this paper, the pixels with $\theta_T^{d}=0$ are called zero-code pixels, the remaining ones are called nonzero-code pixels, and the nonzero-code pixels are defined as the saliency. The saliency of $T$ forms the mask within which the matching is conducted, and the set of the corresponding coordinates is denoted by $\Omega$.
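
A minimal Python sketch of this orientation coding is given below. The Sobel operator, the Gaussian prefilter, and the parameter names (a, n_bins, sigma) are illustrative assumptions; only the overall scheme (thresholding by a fraction of the mean gradient magnitude, folding opposite directions together, and reserving code 0 for nonsalient pixels) follows the description above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def orientation_code_map(img, a=1.0, n_bins=36, sigma=1.0):
    """Gradient orientation codes: 0 for nonsalient pixels, 1..n_bins otherwise."""
    img = gaussian_filter(img.astype(float), sigma)    # noise suppression
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    mag = np.hypot(gx, gy)

    # Saliency threshold proportional to the mean gradient magnitude.
    t_g = a * mag.mean()

    # Fold opposite directions together: only the orientation axis matters,
    # since multisensor gradients tend to be parallel or anti-parallel.
    theta = np.mod(np.arctan2(gy, gx), np.pi)          # range [0, pi)

    # Quantize salient orientations into nonzero codes; weak gradients get 0.
    quantized = np.floor(theta / (np.pi / n_bins)).astype(int) + 1
    codes = np.where(mag >= t_g, quantized, 0)
    return codes, mag >= t_g                           # code map and saliency mask
```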

3.2. Prior Probability Density Estimation

Normally, the prior probability could be defined by the histogram of $\theta_T^{d}$. However, when the template image is small (the small-sample-size problem), the histogram is incapable of characterizing the probability precisely under classical probability theory, which requires as many samples as possible. For this problem, the kernel density estimate (KDE), or Parzen window method [25, 26], is a good solution, and it has been widely used for nonparametric distribution estimation [27]. The prior probability density of $\theta_T^{d}$ can therefore be written as
$$P(\theta)=\frac{c_h}{N_\Omega}\sum_{(x,y)\in\Omega}K\!\left(\frac{\theta-\theta_T^{d}(x,y)}{h}\right),$$
where $K(\cdot)$ is the kernel function and $h$ is the bandwidth: the larger $h$ is, the smoother $P(\theta)$ will be. The constant $c_h$ makes $\sum_{\theta}P(\theta)=1$.

In effect, the KDE method distributes each sample $\theta_T^{d}(x,y)$ over its neighbors through a kernel bounded by the bandwidth $h$, which makes $P(\theta)$ more accurate and more robust than the discrete histogram. Frequently used kernel functions include the uniform, triangle, Epanechnikov, and normal kernels [28]. In this study, the normal (Gaussian) kernel is employed.
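
A small Python sketch of this estimate over the discrete orientation codes is shown below; the bandwidth value, the bin layout, and the omission of angular wrap-around are simplifying assumptions.

```python
import numpy as np

def kde_prior(template_codes, n_bins=36, h=2.0):
    """Gaussian-kernel (Parzen) estimate of the prior over nonzero orientation codes."""
    codes = template_codes[template_codes > 0]         # salient template pixels only
    bins = np.arange(1, n_bins + 1)

    # Spread each observed code over its neighbors with a Gaussian kernel,
    # then normalize so the estimated probabilities sum to one.
    # (Wrap-around of the orientation axis is ignored in this sketch.)
    diff = bins[:, None] - codes[None, :]
    kernel = np.exp(-0.5 * (diff / h) ** 2)
    prior = kernel.sum(axis=1)
    return prior / prior.sum()
```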

3.3. Likelihood Probability Density Model

The likelihood is a subjective probability in our work; it connects the prior and the evidence. In this section, the valid pixel pairs under the mask are sorted into two classes, nonzero-code to nonzero-code matching and nonzero-code to zero-code matching, as shown in Figure 2, and their likelihoods are modeled separately.

Figure 2: Valid pixel pairs in matching are sorted into two classes: nonzero-code to nonzero-code and nonzero-code to zero-code, which are marked with “1” and “2,” respectively. The saliency (nonzero-code) is marked by diagonal lines.

For the first class, the likelihood can be modeled according to the distance between the orientations: clearly, the closer the orientations, the higher the likelihood. Taking two orientations $\theta_1$ and $\theta_2$ as an example, their likelihood probability density can be modeled as
$$P(\theta_2\mid\theta_1)=c_\sigma\, f\!\left(\frac{\alpha(\theta_1,\theta_2)}{\sigma}\right),$$
where $f(\cdot)$ is a hat-like function that is high in the middle and low on both sides, for example, the Gaussian function, and $\sigma$ defines its bandwidth. $\alpha(\theta_1,\theta_2)$ is the intersection angle between the two orientations, calculated by
$$\alpha(\theta_1,\theta_2)=\min\big(|\theta_1-\theta_2|,\ \pi-|\theta_1-\theta_2|\big),$$
and $c_\sigma$ is a constant that makes $P(\theta_2\mid\theta_1)$ satisfy the boundary condition $\sum_{\theta_2}P(\theta_2\mid\theta_1)=1$.

Figure 3 shows an example of the likelihood for a fixed bandwidth $\sigma$. It indicates that the larger the intersection angle, the lower the likelihood, which approaches 0 as $\alpha$ tends to $\pi/2$. Combining (10) and (11), the evidence probability density can be obtained by the total probability formula:
$$P(\theta_2)=\sum_{\theta_1}P(\theta_2\mid\theta_1)\,P(\theta_1).$$

Figure 3: The likelihood probability density versus the intersection angle, based on the Gaussian function.

The continuous evidence is just a smoothed version of the prior obtained through the function $f$, and the two coincide as $\sigma\to 0$, as shown in Figure 4.

Figure 4: Curves of the prior and the evidence under different bandwidths. The evidence is a smoothed version of the prior: the larger the bandwidth, the smoother the curve.

With regard to the nonzero-code to zero-code matching, it can be treated as a mismatch, and its likelihood is defined as 0. Finally, we have
$$P(\theta_2\mid\theta_1)=\begin{cases} c_\sigma\, f\big(\alpha(\theta_1,\theta_2)/\sigma\big), & \theta_2\neq 0,\\ 0, & \theta_2=0. \end{cases}$$
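
The sketch below builds the two-class likelihood table and the evidence it induces via the total probability formula; the Gaussian choice follows the text, while the bandwidth value, the bin centers, and the row normalization are illustrative assumptions.

```python
import numpy as np

def likelihood_and_evidence(prior, n_bins=36, sigma=np.pi / 12):
    """Likelihood table like[i, j] = P(ref code j | template code i) and evidence P(j).

    prior : prior probabilities of the template's nonzero codes (length n_bins).
    Column j = 0 represents the zero code, whose likelihood is set to 0.
    """
    centers = (np.arange(n_bins) + 0.5) * (np.pi / n_bins)   # code -> orientation

    # Intersection angle between two orientations, folded into [0, pi/2].
    d = np.abs(centers[:, None] - centers[None, :])
    alpha = np.minimum(d, np.pi - d)

    # Hat-like (Gaussian) likelihood of a nonzero reference code given a
    # nonzero template code; rows are normalized to sum to one.
    like = np.exp(-0.5 * (alpha / sigma) ** 2)
    like /= like.sum(axis=1, keepdims=True)
    like = np.hstack([np.zeros((n_bins, 1)), like])          # zero-code column

    # Evidence by the total probability formula: P(j) = sum_i P(j | i) P(i).
    evidence = prior @ like
    return like, evidence
```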

3.4. Practical Issues

Now, the BMI of $T$ and $R_{u,v}$ can be derived by substituting the probabilities in (6) with the prior, likelihood, and evidence defined above. Two issues should first be noted. (1) The BMI is defined as 0 if the likelihood equals 0, as in the second class. (2) There do exist pixel pairs whose likelihood is smaller than their evidence, which results in a negative BMI; sometimes the mean BMI may be negative as well, especially at mismatched locations, whereas this never occurs in the standard MI [22]. In this study, pixel pairs with negative BMI are regarded as mismatched, and their BMIs are limited to zero so that they are consistent with the second class.

Finally, the BMI-based matching score between $T$ and $R_{u,v}$ can be derived as
$$\mathrm{BMI}(T,R_{u,v})=\frac{1}{N_\Omega}\sum_{(x,y)\in\Omega}\max\!\left(\log\frac{P\big(\theta_{R_{u,v}}^{d}(x,y)\,\big|\,\theta_T^{d}(x,y)\big)}{P\big(\theta_{R_{u,v}}^{d}(x,y)\big)},\,0\right).$$

The true match of the template image is obtained by maximizing $\mathrm{BMI}(T,R_{u,v})$ over all positions $(u,v)$ within the reference image $R$.
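
Putting the pieces together, the following Python sketch performs the exhaustive translation search implied above; bmi_lut is assumed to hold the precomputed, clipped per-pixel BMI of every template/reference code pair, and all names are illustrative.

```python
import numpy as np

def match_template_bmi(ref_codes, tmpl_codes, tmpl_mask, bmi_lut):
    """Return the translation (y, x) maximizing the mean per-pixel BMI."""
    th, tw = tmpl_codes.shape
    rh, rw = ref_codes.shape
    i = tmpl_codes[tmpl_mask]                    # salient template codes, fixed

    best_score, best_pos = -np.inf, (0, 0)
    for y in range(rh - th + 1):
        for x in range(rw - tw + 1):
            sub = ref_codes[y:y + th, x:x + tw]  # subimage under the template
            score = bmi_lut[i, sub[tmpl_mask]].mean()
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```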

4. Experimental Validation

To evaluate the performance of the proposed method, ten pairs of IR and VS images are first used in the tests, as shown in Figures 5 and 6. The IR images, captured by a long-wave IR camera, are taken as the template images; the VS images, captured by a CCD camera, are taken as the reference images. All the images were captured on a sunny afternoon, with only a random translation between them. As the ground truth is unknown for these real images, it is estimated by manual matching in Photoshop. We also test the four methods on two pairs of images cropped from [29], as shown in Figures 7 and 8.

Figure 5: Ten groups of IR template images.
Figure 6: Ten groups of VS reference images.
Figure 7: (Group 11) A pair of downtown images captured at night and in daytime.
Figure 8: (Group 12) A pair of images of Mauna Loa.

The matching results obtained with the proposed method are compared with those of the NMI, GNMI [19], and GCMI [15] methods, which are chosen because they have been widely used in multisensor image matching and provide relatively good matching performance. In the NMI and GNMI methods, the densities are estimated by a JH with a fixed number of 64 × 64 bins for both images, and the intensity values are linearly mapped into these bins. The parameters $a$, $h$, and $\sigma$ of our method are fixed empirically across all tests; the matching results of our method are shown in Figure 10.

The comparison of the NMI, GNMI, GCMI, and our method is shown in Figure 9. A global search strategy was applied to obtain a full view of the correlation curves. The figures indicate the following. (1) The BMI outperforms the NMI significantly. First, the difference between the highest and the second-highest peak, which measures the ability to suppress interference, is more pronounced for BMI than for NMI throughout the tests; in particular, the NMI mismatches in the 8th, 9th, and 11th tests, while the BMI still succeeds. Second, the highest peak of BMI, which governs the position accuracy, is much sharper than that of NMI. The matching results are listed in Table 1, from which we can see that all the methods exhibit matching errors, but our method is the most accurate. (2) Because GNMI, GCMI, and BMI all exploit gradient information, their performance is similar: they have comparable matching accuracy and an equally salient highest peak. However, as discussed previously, the computation of our method is markedly cheaper when the LUT technique is used: in the MATLAB environment, the cost ratio of the four methods is about 1 : 2 : 3 : 4 (BMI : GCMI : NMI : GNMI), and the advantage can be increased further with other programming tools, such as C and C++.

Table 1: Comparison of the position accuracy.
Figure 9: Matching curves of the BMI (the 4th row) compared with the NMI (the 1st row), GNMI (the 2nd row), and GCMI (the 3rd row).
Figure 10: Matching results of the BMI.

Additional experiments have also been done to test the robustness of the method with respect to the parameter settings in (10), (11), and (14). The results are as follows. (1) When the saliency threshold percentage $a$ is too small, more noise passes the threshold and the matching curve of BMI becomes less smooth, but the main peak of the curve remains unchanged; when $a$ is too large, the salient regions shrink and the position accuracy decreases. (2) The change of the KDE bandwidth $h$ has no substantial influence on the matching, provided it is not excessively large. (3) The likelihood bandwidth $\sigma$ determines the coverage range of the likelihood: the larger $\sigma$ is, the smoother the matching curve and, hence, the more robust the BMI.

5. Conclusions

In this paper, a robust Bayesian estimated mutual information method for multisensor image matching has been proposed. The method is universal as long as the prior and the likelihood can be modeled, and it has been implemented here with gradient prior information: the prior probability density is estimated by the KDE method, and the likelihoods are modeled according to the class of the matching pixel pair. The method is compared with the standard NMI and GNMI in experiments on multisensor images, and the results show that (1) our method provides higher robustness and position accuracy than NMI; (2) since the key quantities of BMI can be precalculated offline, our method is remarkably cheaper computationally; and (3) the parameters of the BMI are easy to set, which facilitates its application.

Further studies include improving the matching accuracy by suppressing speckle noise and refining the saliency mask. Moreover, additional prior information can be introduced according to the practical conditions to further enhance robustness.

References

  1. M. Irani and P. Anandan, “Robust multi-sensor image alignment,” in Proceedings of the 6th IEEE International Conference on Computer Vision, pp. 959–966, January 1998.
  2. J. Chen and J. Tian, “Real-time multi-modal rigid registration based on a novel symmetric-SIFT descriptor,” Progress in Natural Science, vol. 19, no. 5, pp. 643–651, 2009.
  3. L. Huang and Z. Li, “Feature-based image registration using the shape context,” International Journal of Remote Sensing, vol. 31, no. 8, pp. 2169–2177, 2010.
  4. J. B. A. Maintz, P. A. van den Elsen, and M. A. Viergever, “Comparison of edge-based and ridge-based registration of CT and MR brain images,” Medical Image Analysis, vol. 1, no. 2, pp. 151–161, 1996.
  5. Y. Keller and A. Averbuch, “Multisensor image registration via implicit similarity,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 794–801, 2006.
  6. J. Orchard and R. Mann, “Registering a multisensor ensemble of images,” IEEE Transactions on Image Processing, vol. 19, no. 5, pp. 1236–1247, 2010.
  7. F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, and P. Suetens, “Multimodality image registration by maximization of mutual information,” IEEE Transactions on Medical Imaging, vol. 16, no. 2, pp. 187–198, 1997.
  8. P. Viola and W. M. Wells III, “Alignment by maximization of mutual information,” in Proceedings of the 5th International Conference on Computer Vision, pp. 16–23, June 1995.
  9. C. Studholme, D. L. G. Hill, and D. J. Hawkes, “An overlap invariant entropy measure of 3D medical image alignment,” Pattern Recognition, vol. 32, no. 1, pp. 71–86, 1999.
  10. A. Roche, G. Malandain, X. Pennec, and N. Ayache, “The correlation ratio as a new similarity measure for multimodal image registration,” in Proceedings of the 1st International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI ’98), vol. 1496 of Lecture Notes in Computer Science, pp. 1115–1124, Springer, Cambridge, Mass, USA, October 1998.
  11. C. Studholme, D. L. G. Hill, and D. J. Hawkes, “Incorporating connected region labelling into automated image registration using mutual information,” in Proceedings of the Workshop on Mathematical Methods in Biomedical Image Analysis (MMBIA ’96), pp. 23–31, June 1996.
  12. Y. Guo and C.-C. Lu, “Multi-modality image registration using mutual information based on gradient vector flow,” in Proceedings of the 18th IEEE International Conference on Pattern Recognition (ICPR ’06), vol. 3, pp. 697–700, August 2006.
  13. Y. S. Kim, J. H. Lee, and J. B. Ra, “Multi-sensor image registration based on intensity and edge orientation information,” Pattern Recognition, vol. 41, no. 11, pp. 3356–3365, 2008.
  14. J. H. Lee, Y. S. Kim, D. Lee, D.-G. Kang, and J. B. Ra, “Robust CCD and IR image registration using gradient-based statistical information,” IEEE Signal Processing Letters, vol. 17, no. 4, pp. 347–350, 2010.
  15. X. Wang and J. Tian, “Image registration based on maximization of gradient code mutual information,” Image Analysis and Stereology, vol. 24, pp. 1–7, 2005.
  16. J. Liu, J. Tian, and Y. Dai, “Multi-modal medical image registration based on adaptive combination of intensity and gradient field mutual information,” in Proceedings of the 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 1, pp. 1429–1432, 2006.
  17. X. Fan, H. Rhody, and E. Saber, “A spatial-feature-enhanced MMI algorithm for multimodal airborne image registration,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 6, pp. 2580–2589, 2010.
  18. D. Loeckx, P. Slagmolen, F. Maes, D. Vandermeulen, and P. Suetens, “Nonrigid image registration using conditional mutual information,” IEEE Transactions on Medical Imaging, vol. 29, no. 1, pp. 19–29, 2010.
  19. J. P. W. Pluim, J. B. A. Maintz, and M. A. Viergever, “Image registration by maximization of combined mutual information and gradient information,” IEEE Transactions on Medical Imaging, vol. 19, no. 8, pp. 809–814, 2000.
  20. K. S. Kim, J. H. Lee, and J. B. Ra, “Robust multi-sensor image registration by enhancing statistical correlation,” in Proceedings of the 7th International Conference on Information Fusion, pp. 380–386, July 2005.
  21. B. Qin, Z. Gu, X. Sun, and Y. Lv, “Registration of images with outliers using joint saliency map,” IEEE Signal Processing Letters, vol. 17, no. 1, pp. 91–94, 2010.
  22. J. P. W. Pluim, J. B. A. Maintz, and M. A. Viergever, “Mutual-information-based registration of medical images: a survey,” IEEE Transactions on Medical Imaging, vol. 22, no. 8, pp. 986–1004, 2003.
  23. T. M. Cover and J. A. Thomas, Elements of Information Theory, John Wiley & Sons, Hoboken, NJ, USA, 2nd edition, 2006.
  24. D. Mahapatra and Y. Sun, “Rigid registration of renal perfusion images using a neurobiology-based visual saliency model,” EURASIP Journal on Image and Video Processing, vol. 2010, Article ID 195640, 16 pages, 2010.
  25. E. Parzen, “On estimation of a probability density function and mode,” The Annals of Mathematical Statistics, vol. 33, pp. 1065–1076, 1962.
  26. H. Peng, F. Long, and C. Ding, “Feature selection based on mutual information: criteria of Max-Dependency, Max-Relevance, and Min-Redundancy,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1226–1238, 2005.
  27. P. Thévenaz and M. Unser, “Optimization of mutual information for multiresolution image registration,” IEEE Transactions on Image Processing, vol. 9, no. 12, pp. 2083–2099, 2000.
  28. D. Comaniciu and P. Meer, “Mean shift: a robust approach toward feature space analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603–619, 2002.
  29. G. Yang, C. V. Stewart, M. Sofka, and C.-L. Tsai, “Registration of challenging image pairs: initialization, estimation, and decision,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1973–1989, 2007.