Research Article  Open Access
Shiping Guo, Hongqiang Lv, Yongyi Liu, Rongzhi Zhang, Jisheng Li, "Multichannel Parallel Deblurring and Collaborative Registration Using Gaussian Total Variation Regularization for Image Fusion", Mathematical Problems in Engineering, vol. 2016, Article ID 9491326, 10 pages, 2016. https://doi.org/10.1155/2016/9491326
Multichannel Parallel Deblurring and Collaborative Registration Using Gaussian Total Variation Regularization for Image Fusion
Abstract
We focus on the multichannel image fusion problem for the purpose of reaching the diffraction-limited resolution of turbulence-degraded images observed by multiple acquisition channels. A hybrid strategy consisting of multichannel parallel deblurring followed by collaborative registration is developed for the final fusion. In particular, a Gaussian total variation regularization scheme taking advantage of low-order Gaussian derivative operators is proposed, which integrates the deblurring and registration problems into a unified mathematical formalization. Specifically, the gradient magnitude of Gaussian operator is used to define the total variation norm, and the Laplacian of Gaussian operator is used to adjust the regularization parameter when searching for the extremum in each iterative step. In addition, a coordination technique involving the regularization parameter among different channels is also considered.
1. Introduction
Multichannel image fusion is an important task for many engineering applications in signal processing, communications, and control [1]. For the most common single-input multiple-output (SIMO) image acquisition model [2], the term "multichannel" often refers to multisensor imaging, where the channel denotes the level of resolution or spectral band, or to multiframe imaging, where the channel represents the imaging condition at a particular observation time. In image fusion, several images of the same object are observed from different channels and cooperatively achieve the goal of superior image quality for that object. It is noteworthy that the definition of image quality is determined by the specific application [3].
Imaging through atmospheric turbulence is our main application area, which includes ground-based astronomical observation, artificial space object identification, and space situational awareness [4]. In these applications, the target images observed through single or distributed adaptive optics (AO) telescopes are severely blurred by atmosphere-induced optical deviation despite the AO compensation [5]. Thus in these imaging scenarios, the goal of multichannel image fusion is to reach or approach the diffraction-limited resolution. In addition, artificial space objects are not stationary and always rotate in ground-based imaging, which means that the images among different channels are spatially misaligned [6]. Therefore, multichannel image registration and multichannel deblurring are two key steps towards high-precision fusion.
In multichannel deblurring, a spatially shift-invariant imaging system, that is, a linear convolution imaging model, is often assumed [7]. With this assumption, the image acquisition model is

$$y_k(s) = (h_k * x)(s) + n_k(s), \quad k = 1, 2, \ldots, K, \tag{1}$$

where $*$ denotes the convolution operation; $K$ is the number of channels; $y_k$, $h_k$, and $n_k$ are the observed image, residual atmospheric point spread function (PSF), and additive noise of the $k$-th channel, respectively; $x$ represents the sharp object image; and $s$ refers to the spatial coordinates in the images. When using the image acquisition model of (1), multichannel deblurring is also known as multichannel blind deconvolution, and the aim of deconvolution or deblurring is to obtain the most likely estimate of the object $x$ from the blurred and noisy images $y_k$. A considerable number of multichannel blind deconvolution algorithms have been developed in previous work [8]; nevertheless, all of them require that the individual channels be spatially aligned. As mentioned above, spatial transformations, such as translation, rotation, or complex nonlinear displacement, often exist among different channels in our practical situations. In this case, an accurate image acquisition model involving both blurring and misalignment for the image fusion application is [2, 6]

$$y_k(s) = (h_k * x)(\tau_k(s)) + n_k(s), \quad k = 1, 2, \ldots, K, \tag{2}$$

where $\tau_k$ is used to describe the deformation field between the reference image and the $k$-th channel.
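The acquisition model of (2) can be sketched numerically. In the following Python snippet the residual PSF $h_k$ is stood in for by a Gaussian blur, the deformation $\tau_k$ by a pure translation, and $n_k$ by additive Gaussian noise; all three choices are illustrative assumptions, not the turbulence statistics used later in the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

rng = np.random.default_rng(0)

def observe_channel(x, psf_sigma, translation, noise_std):
    """Simulate y_k = (h_k * x)(tau_k(s)) + n_k for one channel.

    h_k is modeled as a Gaussian blur of width psf_sigma, tau_k as a
    pure translation, and n_k as additive Gaussian noise -- all simple
    stand-ins for the atmospheric PSF and deformation field.
    """
    blurred = gaussian_filter(x, sigma=psf_sigma)          # h_k * x
    warped = shift(blurred, translation, order=1)          # apply tau_k
    return warped + noise_std * rng.normal(size=x.shape)   # add n_k

# toy "object": a bright square on a dark background
x = np.zeros((64, 64)); x[24:40, 24:40] = 1.0
y1 = observe_channel(x, psf_sigma=2.0, translation=(0, 0), noise_std=0.01)
y2 = observe_channel(x, psf_sigma=2.5, translation=(3, -2), noise_std=0.01)
```

The two simulated channels are blurred differently and spatially misaligned, which is exactly the situation the rest of the paper addresses.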
Based on an intuitive reading of (2), the traditional AO image fusion framework consists of an uncooperative registration step followed by a centralized deconvolution step; the flowchart of this framework is shown in Figure 1. However, two key disadvantages are present: first, although there are many successful registration algorithms, high-precision registration results, especially for blurred channels, can rarely be achieved, which produces unexpected artifacts in the subsequent deconvolution stage; second, although centralized deconvolution can take advantage of the complementary information among different channels to improve the deblurring level, the redundant and diverse information is not adequately exploited in the registration stage, which indicates that coupled registration methods between each registered pair need to be studied.
In this work, in order to avoid the unexpected artifacts that misregistration introduces into the deblurring step, we propose that deblurring be performed before registration in the application of AO image fusion. Within this framework, centralized multiframe blind deconvolution turns into a multichannel parallel implementation, and the coordination among different channels is achieved in the registration and final fusion steps. In particular, we propose a Gaussian total variation regularization scheme for both parallel deconvolution and collaborative registration, in which low-order Gaussian derivative operators are fully exploited to improve the performance of deblurring and registration, and a coordination technique between the reference channel and the misaligned channels is studied for more accurate registration. Technical details of the proposed method are specified in the following sections.
2. Materials and Methods
2.1. Problem Formulation
Now we reformulate the problem addressed in this paper more clearly: we try to reach the diffraction-limited resolution of turbulence-degraded images by image fusion technology, and the available data are the degraded images observed through multiple acquisition channels following the imaging model of (2). Our proposed solution is a hybrid scheme: first, the main framework of our method consists of two sequential steps, that is, multichannel parallel deconvolution and collaborative registration; second, both the deconvolution and registration problems are considered as anisotropic diffusion processes and are iteratively solved by Gaussian total variation regularization; third, low-order Gaussian derivative operators are employed to define the total variation norm and to adjust the regularization parameter; last, the coordination between different channels is achieved through the regularization parameter in collaborative registration.
Figure 2 is an illustration of our proposed framework for the application of AO image fusion, which differs substantially from existing methods. We propose to model the deconvolution and registration by a unified mathematical formalization using total variation regularization, mainly owing to its significant edge-preserving ability [9]. Formally, total variation regularization seeks the minimization of the following energy functional [10, 11]:

$$E(u) = \mathrm{Fid}(u) + \lambda \, \mathrm{TV}(u), \tag{3}$$

where $u$ is the target variable to be solved (for deconvolution, $u$ refers to the sharp object image; for registration, $u$ refers to the deformation field), $\mathrm{Fid}(u)$ is the data fidelity term of $u$, $\mathrm{TV}(u)$ is the total variation prior term imposed on $u$, and $\lambda$ is the regularization parameter. These three factors, as well as the numerical algorithms for minimization, are the main research interests in total variation regularization, and the exact definition of each factor depends on the specific application. The construction of $\mathrm{Fid}$, $\mathrm{TV}$, and $\lambda$ for deconvolution and registration is our main contribution and will be described in detail. The numerical algorithm is not our concern in this paper; most numerical optimization methods are computationally demanding [12], and we directly employ the simple gradient descent algorithm for both the deconvolution and registration steps.
2.2. Parallel Deblurring
Deblurring of the observed images $y_k$ is the first step in our fusion framework. With multiframe images from multiple channels in hand, deblurring is achieved in a parallel mode instead of by centralized multiframe deconvolution. Single-image Gaussian total variation blind deconvolution is performed in each separate channel, and the deconvolution output of the $k$-th channel is the sharp image $\hat{x}_k$. Of course, all outputs $\hat{x}_k$, $k = 1, 2, \ldots, K$, are spatially misaligned. In this subsection, the subscript $k$ in each variable is omitted for the sake of simplicity. For single-channel total variation blind deconvolution, the energy functional is

$$E(x, h) = \mathrm{Fid}(x, h) + \lambda \, \mathrm{TV}(x), \tag{4}$$

where $\lambda$ is the regularization parameter for deblurring. The residual atmospheric PSF $h$ is a byproduct of the deconvolution outputs.
Firstly, we construct the fidelity term $\mathrm{Fid}$. The form of the fidelity term is determined by the statistical model of the noise $n$. Typical noise models and their corresponding fidelity terms are [13]

$$\mathrm{Fid}(x, h) = \tfrac{1}{2} \lVert h * x - y \rVert_2^2 \quad \text{for Gaussian noise (the } L_2 \text{ model)}, \tag{5}$$

$$\mathrm{Fid}(x, h) = \lVert h * x - y \rVert_1 \quad \text{for impulsive noise (the } L_1 \text{ model)}, \tag{6}$$

$$\mathrm{Fid}(x, h) = \sum_s \big( (h * x)(s) - y(s) \log (h * x)(s) \big) \quad \text{for Poisson noise (the KL model)}. \tag{7}$$

Here $\lVert \cdot \rVert_2$ and $\lVert \cdot \rVert_1$ denote the $L_2$ norm and the $L_1$ norm, respectively. For ground-based AO imaging, the additive noise contains major photon noise and minor electronic noise, so in this work we take the general least squares fitting term of (5) as the fidelity term in each single-channel deblurring.
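The three fidelity terms of (5)–(7) are straightforward to evaluate numerically. The sketch below assumes `"same"`-mode convolution via `scipy.signal.fftconvolve` and a small `eps` guard inside the logarithm of the Poisson term; both are implementation conveniences not specified in the text:

```python
import numpy as np
from scipy.signal import fftconvolve

def residual(x, h, y):
    """h * x - y with 'same' boundary handling (an implementation choice)."""
    return fftconvolve(x, h, mode="same") - y

def fid_gaussian(x, h, y):
    """L2 fidelity of (5), suited to Gaussian noise."""
    r = residual(x, h, y)
    return 0.5 * np.sum(r * r)

def fid_impulsive(x, h, y):
    """L1 fidelity of (6), suited to impulsive noise."""
    return np.sum(np.abs(residual(x, h, y)))

def fid_poisson(x, h, y, eps=1e-12):
    """KL-type fidelity of (7), suited to Poisson noise."""
    hx = fftconvolve(x, h, mode="same")
    return np.sum(hx - y * np.log(hx + eps))
```

With a delta PSF and $y = h * x$, the Gaussian and impulsive fidelities vanish, as expected of a perfect fit.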
Secondly, we construct the total variation prior term $\mathrm{TV}$. The discrete form of the total variation norm is generally defined by [10]

$$\mathrm{TV}(x) = \sum_s \lvert \nabla x(s) \rvert = \sum_s \sqrt{(D_h x)^2(s) + (D_v x)^2(s)}, \tag{8}$$

where $\nabla$ denotes the discrete gradient and $D_h$ and $D_v$ are the simple forward finite-difference operators along the horizontal and vertical directions, respectively. Improved definitions of the total variation norm include the high-order total variation [14] and the fractional-order total variation [15], which balance the edge and flat regions. However, these definitions are all based on forward difference computations and may exhibit large sensitivity to noise or produce an unexpected staircase effect.
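As a concrete check of definition (8), the discrete TV norm with forward finite differences (zero Neumann padding at the image boundary, an assumed convention) can be computed as:

```python
import numpy as np

def forward_diff(x):
    """Forward finite differences D_h, D_v with zero (Neumann) boundary."""
    dh = np.zeros_like(x); dh[:, :-1] = x[:, 1:] - x[:, :-1]
    dv = np.zeros_like(x); dv[:-1, :] = x[1:, :] - x[:-1, :]
    return dh, dv

def tv_norm(x):
    """Discrete TV norm of (8): sum of gradient magnitudes."""
    dh, dv = forward_diff(x)
    return np.sqrt(dh ** 2 + dv ** 2).sum()

flat = np.ones((8, 8))                      # TV = 0: no variation
step = np.zeros((4, 4)); step[:, 2:] = 1.0  # TV = 4: one unit jump per row
```

A constant image has zero TV, while a unit step across four rows contributes exactly one unit of variation per row.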
Previous work [16, 17] pointed out that total variation regularization can be viewed as a special case of the anisotropic diffusion process and indicated that a family of increasingly sharp images can be derived from the following time evolution model:

$$\frac{\partial x}{\partial t} = -\frac{\partial \mathrm{Fid}(x, h)}{\partial x} + \lambda \, \mathrm{div}\!\left( \frac{\nabla x}{\lvert \nabla x \rvert} \right), \tag{9}$$

with the initial value $x(0) = y$, where $t$ denotes the artificial evolution time. The second term on the right-hand side of (9), that is, $\lambda \, \mathrm{div}(\nabla x / \lvert \nabla x \rvert)$, is the anisotropic diffusion operator (ADO) with diffusion coefficient $1/\lvert \nabla x \rvert$, which controls the diffusion behavior of the evolution model. The success of total variation regularization lies in the fact that the ADO only diffuses along the orthogonal directions of the gradient (i.e., the tangential directions of edges) and thus allows recovery of the edges in the object image. However, edges do not actually exist in flat regions; diffusing only along the edge directions there results in less noise suppression and may even produce undesired false edges (namely, the staircase effect).
To restrain the staircase effect, in this work we propose to replace the general finite-difference gradient operators $D_h$ and $D_v$ with Gaussian smoothed gradient operators $G_{\sigma,h}$ and $G_{\sigma,v}$, which are defined by [18, 19]

$$G_{\sigma,d} = \frac{\partial G_\sigma}{\partial d}, \quad d \in \{h, v\}, \tag{10}$$

where $G_\sigma$ is the Gaussian function with scale parameter $\sigma$ and $d$ denotes the horizontal or vertical direction. After using the Gaussian gradient operators, the total variation norm of (8) turns into the Gaussian total variation regularization term

$$\mathrm{GTV}(x) = \sum_s \sqrt{(G_{\sigma,h} * x)^2(s) + (G_{\sigma,v} * x)^2(s)} = \sum_s \lvert \nabla_G x(s) \rvert, \tag{11}$$

and the corresponding ADO turns into $\mathrm{div}(\nabla_G x / \lvert \nabla_G x \rvert)$, where $\nabla_G$ denotes the Gaussian gradient. There are two advantages of using the first-order Gaussian partial derivative operators instead of the simple finite-difference operators when constructing the total variation norm: first, the Gaussian gradient operation reduces part of the noise in advance and thus decreases the possibility of taking noise for false edges, whereas the finite-difference operators are sensitive to noise; second, the gradient magnitude of Gaussian (GMG) operator removes a significant amount of image spatial redundancy and better extracts lines, edges, corners, and so forth, in object images; thus, the diffusion coefficient $1/\lvert \nabla_G x \rvert$ can more precisely control the diffusion behavior as the iterations march on.
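The Gaussian gradient of (10) and the GTV norm of (11) can be realized directly with `scipy.ndimage.gaussian_filter`, whose `order` argument differentiates the Gaussian kernel along the chosen axis:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_gradient(x, sigma):
    """First-order Gaussian derivatives G_{sigma,h} * x and G_{sigma,v} * x."""
    gh = gaussian_filter(x, sigma, order=(0, 1))  # derivative along columns (h)
    gv = gaussian_filter(x, sigma, order=(1, 0))  # derivative along rows (v)
    return gh, gv

def gtv_norm(x, sigma):
    """Gaussian total variation norm of (11)."""
    gh, gv = gaussian_gradient(x, sigma)
    return np.sqrt(gh ** 2 + gv ** 2).sum()
```

As with the plain TV norm, a constant image yields (numerically) zero GTV, while an edge contributes a smoothed but still localized response.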
In fact, we can see from (9) that the difference between using the general finite-difference gradient operators and the Gaussian gradient operators lies in the ADO term: in each iteration, the former ADO is $\mathrm{div}(\nabla x / \lvert \nabla x \rvert)$, while the latter turns into $\mathrm{div}(\nabla_G x / \lvert \nabla_G x \rvert)$. The two ADOs of the first iteration when restoring a blurred satellite image are shown in Figure 3; the turbulence strength is given in terms of $D/r_0$, where $D$ is the telescope aperture and $r_0$ is the Fried parameter, and the SNR of the blurred image is 33 dB. As can be seen, from the perspective of anisotropic diffusion, using the Gaussian gradient operators results in less diffusion along false edge directions than using the simple finite-difference gradient operators, which helps reduce the staircase effect and reach the steady state of (9) faster.
Thirdly, we construct the regularization parameter $\lambda$. Several potential strategies for determining this parameter exist [20], and a systematic construction model has even been proposed [21]; however, these strategies are generic, and selecting this important parameter remains intractable for AO image deconvolution. Two instructive guidelines are generally accepted when choosing this parameter: first, the selection of $\lambda$ should provide a spatially adaptive property [22]; that is, $\lambda$ should be large in flat regions, mainly to reduce noise, and small in distinct gradient regions, mainly to preserve edges and detailed information; second, a general model of the parameter can be constructed by the formulation $\lambda(s) = \gamma_t \, f(c \, E(s))$ [21], where $E$ is an edge indicator, $c$ is a constant to adjust the intensity, $\gamma_t$ is the preset intensity factor for the current iteration, and $f$ is the constructor function.
In this work, three specific and effective strategies are adopted to determine the parameter $\lambda$ for total variation deconvolution of ground-based AO images. According to the construction model mentioned above, first, we adopt the constructor function $f$ recommended in [21] since it provides a smaller normalized mean square error (NMSE); second, iteratively decreasing the value of $\gamma_t$ helps avoid the trivial solution and get closer to the sharp image [23]; in other words, earlier iterations primarily serve for denoising while later iterations mainly function to restore edges and details; third, and crucially, we use the second-order Gaussian derivative, that is, the Laplacian of Gaussian (LoG) operator, to construct the edge indicator $E$, which is defined by [18, 19]

$$\mathrm{LoG}_\sigma = \frac{\partial^2 G_\sigma}{\partial h^2} + \frac{\partial^2 G_\sigma}{\partial v^2}; \tag{12}$$

in this way, the edge indicator is $E(s) = \lvert (\mathrm{LoG}_\sigma * x)(s) \rvert$, where $\lvert \cdot \rvert$ computes the absolute value of a matrix to ensure the nonnegativity property. LoG filtering is considered to carry crucial structural information of an image at the positions of its zero crossings and has been widely used to characterize various semantic image structures, so we propose to take the LoG filtering as the edge indicator when determining the parameter $\lambda$. In addition, the flat regions of a ground-based AO image include both the generally flat regions of the space object itself and the absolutely flat region of deep space; thus, the value of $\lambda$ in these different regions should be distinguishable, and from this point of view, second-order derivative operators are better qualified as the edge indicator than first-order ones. Combining the above three strategies, the regularization parameter is finally computed by

$$\lambda(s) = \gamma_t \, f\big( c \, \lvert (\mathrm{LoG}_\sigma * x)(s) \rvert \big) \tag{13}$$

with respect to the spatial coordinates $s$.
At this point, we reformulate the single-channel deblurring problem; specifically, we solve the blind deconvolution problem of (2) by the following Gaussian total variation (GTV) minimization:

$$\min_{x, h} \; \tfrac{1}{2} \lVert h * x - y \rVert_2^2 + \lambda \, \mathrm{GTV}(x), \tag{14}$$

where the fidelity term is in the form of (5), $\lambda$ is determined by (13), and $\mathrm{GTV}(x)$ is defined by (11). We use the acronym GTV because both the $\mathrm{TV}$ term and the $\lambda$ term take advantage of the Gaussian derivative operators. In order to obtain a physical solution, as mentioned in [24], additional constraints need to be imposed on both the image $x$ and the PSF $h$. Here only the nonnegativity conditions on $x$ and $h$, that is, $x \geq 0$ and $h \geq 0$, and the identity condition on $h$, that is, $\sum_s h(s) = 1$, are incorporated into (14); no other a priori assumption is adopted.
A general solution to the formulation of (14) is alternating minimization (AM), that is, converting the nonconvex optimization problem of (14) into two constrained convex subproblems and iteratively alternating between the estimation of the image $x$ and the PSF $h$. The two subproblems are, given $h$, searching for the solution of $x$ by

$$\min_{x} \; \tfrac{1}{2} \lVert h * x - y \rVert_2^2 + \lambda \, \mathrm{GTV}(x) \tag{15}$$

subject to $x \geq 0$, and, given $x$, searching for the solution of $h$ by

$$\min_{h} \; \tfrac{1}{2} \lVert h * x - y \rVert_2^2 \tag{16}$$

subject to $h \geq 0$ and $\sum_s h(s) = 1$. We directly employ the gradient descent algorithm for each of the above two subproblems, so the image and the PSF are alternately updated by

$$x^{(t+1)} = x^{(t)} - \alpha_x \left.\frac{\partial E}{\partial x}\right|_{x^{(t)},\, h^{(t)}}, \qquad h^{(t+1)} = h^{(t)} - \alpha_h \left.\frac{\partial E}{\partial h}\right|_{x^{(t+1)},\, h^{(t)}} \tag{17}$$

at the $t$-th iteration, where $t = 0, 1, 2, \ldots$ and $\alpha_x$ and $\alpha_h$ are the step sizes. The additional constraints on $x$ and $h$ are each enforced in a subsequent projection step. The whole procedure of our single-channel GTV blind deconvolution algorithm is listed in Algorithm 1.

The GTV blind deconvolution algorithm is executed in parallel for all channels, and, accordingly, the outputs of the multichannel deconvolution step are the sharp but spatially misaligned images $\hat{x}_k$, $k = 1, 2, \ldots, K$. As illustrated in Figure 2, the outputs of all channels are exactly the input quantities for the next registration step.
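The alternating scheme of (15)–(17) can be sketched as follows. This is a deliberately simplified illustration, not the tuned implementation of the experiments: the PSF gradient is approximated by cropping a full correlation to the PSF support, the GTV gradient uses a small `eps` guard, $\lambda$ is held constant rather than updated by (13), and the step sizes are arbitrary placeholders:

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter

def gtv_grad(x, sigma=1.0, eps=1e-3):
    """Approximate gradient of the GTV term, -div(grad_G x / |grad_G x|)."""
    gh = gaussian_filter(x, sigma, order=(0, 1))
    gv = gaussian_filter(x, sigma, order=(1, 0))
    mag = np.sqrt(gh ** 2 + gv ** 2 + eps ** 2)   # eps avoids division by zero
    ph, pv = gh / mag, gv / mag
    div = np.zeros_like(x)
    div[:, 1:] += ph[:, 1:] - ph[:, :-1]          # backward differences for div
    div[1:, :] += pv[1:, :] - pv[:-1, :]
    return -div

def am_deblur(y, psf_size=9, iters=20, ax=0.2, ah=1e-6, lam=0.01):
    """Toy alternating minimization for (15)-(17): gradient steps on the
    image x and the PSF h, then projection onto x >= 0, h >= 0, sum(h) = 1."""
    x = y.copy()
    h = np.full((psf_size, psf_size), 1.0 / psf_size ** 2)
    for _ in range(iters):
        r = fftconvolve(x, h, mode="same") - y              # h * x - y
        gx = fftconvolve(r, h[::-1, ::-1], mode="same")     # fidelity grad in x
        x = np.clip(x - ax * (gx + lam * gtv_grad(x)), 0.0, None)
        r = fftconvolve(x, h, mode="same") - y
        g_full = fftconvolve(r, x[::-1, ::-1], mode="same")
        c0, c1 = np.array(g_full.shape) // 2
        k = psf_size // 2
        gh_ = g_full[c0 - k:c0 + k + 1, c1 - k:c1 + k + 1]  # crop to support
        h = np.clip(h - ah * gh_, 0.0, None)
        h /= h.sum() + 1e-12                                # enforce sum(h) = 1
    return x, h
```

The projection steps after each gradient update mirror the constraint-enforcement step described after (17).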
2.3. Collaborative Registration
Registration of the deconvolution outputs is the second step in our fusion framework. With the multichannel sharp but spatially misaligned images $\hat{x}_k$, $k = 1, 2, \ldots, K$, in hand, the registration between the reference channel and the other, transformed channels is achieved in a tightly cooperative way. A reference image from a certain acquisition channel is selected in advance by human subjective evaluation or an objective image quality assessment metric, and the registration algorithm is then executed on all registered pairs constituted by the selected reference image and each of the other misaligned images. The outputs of the registration step are sharp and spatially aligned multichannel images.
Given the reference image $R$ and the transformed image $T$ in the image domain $\Omega$, the task of the registration problem is to seek the displacement field $w$ such that the transformed image $T(s + w(s))$ optimally matches the reference image $R(s)$; that is [25],

$$\min_{w} \; \mathrm{Dis}\big(R(s), T(s + w(s))\big). \tag{18}$$

Following the deblurring step, suppose the selected reference channel is indexed as channel 1; then the deconvolution output of the reference channel is $\hat{x}_1$, the other outputs are $\hat{x}_k$, $k = 2, \ldots, K$, and the correspondences in the registration step are

$$R = \hat{x}_1, \qquad T = \hat{x}_k, \quad k = 2, \ldots, K. \tag{19}$$

Also, in this subsection, the subscript $k$ in each variable is omitted for the sake of simplicity.
Various registration methods have been developed in previous studies [26]. In this work, we focus on the total variation regularization method since it is more robust to outliers than diffusion-based methods and allows for important discontinuities in the displacement field. In variational image registration approaches [11], the optimal displacement field $w$ is solved by minimizing the energy functional

$$E(w) = \mathrm{Dis}(R, T; w) + \lambda \, \mathrm{Reg}(w), \tag{20}$$

where $\mathrm{Dis}$ is the (dis)similarity measure between the reference image $R$ and the transformed image $T$, $\mathrm{Reg}$ is the prior constraint term or regularization term, and $\lambda$ is the regularization parameter compromising $\mathrm{Dis}$ and $\mathrm{Reg}$.
We firstly construct the similarity measure term $\mathrm{Dis}$. The selection of the similarity metric depends on the specific application. For ground-based AO imaging, atmospheric degradation maintains the total image energy in each imaging channel according to the prior assumption $\sum_s h(s) = 1$ on the atmospheric PSF $h$; in addition, total variation blind deconvolution provides the same rate of energy change as long as the iteration counts are equal for all channels. Thus, it can be assumed that the candidate images to be registered have the same intensity. In this case, the sum of squared differences (SSD) metric is suitable for the similarity measure, which is defined in discrete form by [27]

$$\mathrm{Dis}(R, T; w) = \frac{1}{2} \sum_{s} \big( T(s + w(s)) - R(s) \big)^2. \tag{21}$$
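The SSD measure of (21) requires evaluating $T(s + w(s))$ at non-grid positions; a minimal sketch using bilinear interpolation via `scipy.ndimage.map_coordinates` (with nearest-neighbor boundary handling, an assumed convention) is:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(T, w):
    """Evaluate T(s + w(s)) by bilinear interpolation; w has shape (2, H, W)."""
    H, W = T.shape
    grid = np.mgrid[0:H, 0:W].astype(float)
    return map_coordinates(T, grid + w, order=1, mode="nearest")

def ssd(R, T, w):
    """SSD dissimilarity of (21)."""
    d = warp(T, w) - R
    return 0.5 * np.sum(d * d)
```

With a zero displacement field, `ssd(R, R, w)` is exactly zero, and a displacement that compensates a known shift lowers the SSD below that of the unwarped pair.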
Secondly, we construct the regularization term $\mathrm{Reg}$. In ground-based AO image registration, the displacement field usually contains a smooth component and a piecewise constant component [25]. The traditional quadratic regularization term is qualified for penalizing the smooth component, while variational or PDE-based regularization terms are capable of penalizing the discontinuous component, which may carry important structural features of space objects. As mentioned above, the total variation regularization method is employed and improved for the registration step. Although the total variation norm models the piecewise constant motion of the displacement field well, its staircase effect is undesired. The Gaussian total variation regularization proposed in the deblurring step compromises the penalties for the two components and is inherited in this registration step. According to the definition of the Gaussian total variation norm from (10) and (11), the regularization term for registration takes the form

$$\mathrm{Reg}(w) = \sum_{d \in \{h, v\}} \sum_{s} \lvert \nabla_G w_d(s) \rvert, \tag{22}$$

where $w_h$ and $w_v$ are the horizontal and vertical components of the displacement field.
The last element to be constructed is the regularization parameter $\lambda$. The selection of $\lambda$ should likewise provide the spatially adaptive property in both flat and discontinuous regions. Previous studies [27] suggested that the optimal choice of $\lambda$ can be determined by the noise statistics of the registered image pairs or by a trained professional. In this work, we propose to determine the regularization parameter in a tightly collaborative way in consideration of the practical requirements of our application. Specifically, the construction model of the parameter follows the formulation $\lambda(s) = \gamma_t \, f(c \, E(s))$ from the deblurring step, and we use the gradient magnitude of Gaussian (GMG) operator to construct the edge indicator $E$. It is noteworthy that the other channels try to align to the reference channel during registration; hence, we propose to force the edge indicator to be derived from the reference image $R$, that is, $E(s) = \lvert \nabla_G R(s) \rvert$, for each registered image pair. Finally, the regularization parameter for the registration step is computed by

$$\lambda(s) = \gamma_t \, f\big( c \, \lvert \nabla_G R(s) \rvert \big), \tag{23}$$

where $\gamma_t$ is the preset intensity factor for the $t$-th iteration and $c$ is a constant to adjust the intensity. In this way, the coordination among the sequence of registered image pairs is exactly achieved through the regularization parameter $\lambda$.
At this point, all elements of the energy functional (20) have been constructed, and, by viewing the minimization of (20) as an artificial time evolution [28, 29], the gradient descent process is

$$\frac{\partial w}{\partial t} = -\big( T_w - R \big) \nabla T_w + \lambda \, \mathrm{div}\!\left( \frac{\nabla_G w}{\lvert \nabla_G w \rvert} \right), \tag{24}$$

with the initial value $w(0) = 0$, where $T_w$ denotes the abbreviated form of $T(s + w(s))$ to enhance readability. Discretizing the time evolution with a step size $\Delta t$, the displacement field is iteratively updated by

$$w^{(t+1)} = w^{(t)} + \Delta t \left[ -\big( T_{w^{(t)}} - R \big) \nabla T_{w^{(t)}} + \lambda \, \mathrm{div}\!\left( \frac{\nabla_G w^{(t)}}{\lvert \nabla_G w^{(t)} \rvert} \right) \right], \tag{25}$$

with $t$ being the index of iteration. Our collaborative registration algorithm for each registered pair is listed in Algorithm 2. Once the displacement field $w$ is solved, the transformed image $T$ of that channel can be aligned to the reference image $R$.

The collaborative registration algorithm is executed for all registered image pairs, and, accordingly, the outputs of the registration step are sharp and spatially aligned images, which can be used for the final fusion step. We do not present or improve any specific fusion algorithm in this work, since existing fusion algorithms, such as wavelet-based methods [1], can be directly adopted to effectively fuse sharp and spatially aligned images.
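The registration loop of (24)–(25) can be sketched as follows. For brevity this toy version drops the GTV penalty (i.e., it sets $\lambda = 0$ and keeps only the data-driven force), smooths the image gradient slightly for numerical stability, and uses an arbitrary step size; it is an illustration of the descent dynamics, not the full collaborative algorithm:

```python
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter

def register(R, T, iters=300, dt=1.0):
    """Toy gradient descent in the spirit of (24)-(25), with the GTV
    penalty dropped (lambda = 0); w is a dense field of shape (2, H, W)."""
    H, W = R.shape
    grid = np.mgrid[0:H, 0:W].astype(float)
    w = np.zeros((2, H, W))
    for _ in range(iters):
        Tw = map_coordinates(T, grid + w, order=1, mode="nearest")  # T(s + w)
        # gradient of the warped image, lightly smoothed for stability
        gv = gaussian_filter(Tw, 1.0, order=(1, 0))
        gh = gaussian_filter(Tw, 1.0, order=(0, 1))
        r = Tw - R
        w[0] -= dt * r * gv   # data-driven force along rows
        w[1] -= dt * r * gh   # data-driven force along columns
    return w
```

On a pair of images differing by a small shift, the recovered field reduces the SSD of the warped pair below that of the original pair.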
3. Results and Discussion
Numerical experiments were implemented to test the performance of our method. The turbulence-degraded images observed through two channels were simulated independently according to Noll's method [30] using 151-order Zernike polynomials, and Poisson noise was added subsequently. We first test the accuracy and stability of parallel deblurring, which is the critical step of the integrated fusion framework. For comparison, the original total variation (TV) regularization [23, 24] was also tested; the latest improvements to TV-based methods can be easily incorporated into both the original TV and our GTV regularization.
Several parameters must be initialized, as shown in Algorithm 1. We empirically set the Gaussian scale $\sigma$, the constant $c$, the intensity factors $\gamma_t$, and the step sizes $\alpha_x$ and $\alpha_h$ for satellite image deconvolution, and the restored results at the 8500th iteration are shown in Figure 4. The turbulence strength of blurred image (a) is given in terms of $D/r_0$, and the noise level is 33 dB. We can see from Figure 4 that the GTV deblurring result (c) is smoother in flat regions but sharper at edges and in detailed regions than the TV deblurring result (b). The NMSE between the diffraction-limited satellite image and (a), (b), and (c) of Figure 4 is 0.1253, 0.0494, and 0.0399, respectively. Here, the NMSE is used to measure the performance of the different deconvolution methods and is defined by [21]

$$\mathrm{NMSE} = \frac{\lVert \hat{x} - x \rVert_2^2}{\lVert x \rVert_2^2},$$

where $x$ and $\hat{x}$ are the diffraction-limited image and the restored result, respectively. In addition, plot (d) of Figure 4 shows the spatial intensity distribution of the regularization parameter $\lambda$ at the 8500th iteration, which clearly conveys the spatially adaptive property. We also found that this property becomes more and more distinct as the iterations march on, which is the direct result of using the LoG operator together with iteratively decreasing the value of $\gamma_t$, especially in the later stage of the whole minimization. In later iterations, the image noise is negligible and the details are gradually recovered, so the edge indicator of the LoG filtering increasingly takes effect.
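The NMSE figure of merit used throughout this section is a one-liner:

```python
import numpy as np

def nmse(x_hat, x):
    """Normalized mean square error between a restored image x_hat and a
    reference (diffraction-limited) image x."""
    return np.sum((x_hat - x) ** 2) / np.sum(x ** 2)
```

A perfect restoration gives an NMSE of zero, and doubling the reference image gives an NMSE of exactly one.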
The NMSE of the TV and GTV deblurring results versus the iteration number is shown in Figure 5(a). As can be seen, the GTV method always keeps a smaller NMSE as the iterations go on; moreover, the GTV method significantly restrains the instability in the earlier stage (before roughly the 3000th iteration) of the whole minimization process. The instability in GTV is less pronounced mainly because of the reduced sensitivity to noise obtained by using the Gaussian gradient operators. In the later stage (after roughly the 6000th iteration), the small but meaningful improvement in NMSE derives from the fact that the diffusion coefficient $1/\lvert \nabla_G x \rvert$ can evolve the diffusion behavior more precisely than $1/\lvert \nabla x \rvert$ does.
The relative errors of the successive estimations (RESE, defined as $\lVert x^{(t+1)} - x^{(t)} \rVert_2 / \lVert x^{(t)} \rVert_2$ [22]) of the TV and GTV deconvolution results are shown in Figure 5(b), which reveals different convergence properties. GTV improves the performance in both convergence speed and stability: it reduces the oscillation of the RESE significantly and converges faster (in about 1500 iterations in this experiment) than the TV method. We believe that the use of the LoG as well as the GMG operator is beneficial to this improvement in convergence behavior.
We secondly test the performance of collaborative registration. Both the deblurring and registration problems are solved by Gaussian total variation regularization in our proposed framework, and the accuracy and robustness of the GTV norm were demonstrated in the preceding experimental results, so the following experiments focus on a quantitative comparison of the registration accuracy with and without coordination. For quantitative evaluation, assessment metrics are needed. In our experiments, we use the mutual information (MI) and the correlation coefficient (CC) between the reference channel image $R$ and the transformed image $T$ as evaluation metrics [29], which are defined by

$$\mathrm{CC}(R, T) = \frac{\mathrm{Cov}(R, T)}{\sigma_R \, \sigma_T},$$

where $\mathrm{Cov}$ computes the covariance and $\sigma_R$ and $\sigma_T$ denote the standard deviations of $R$ and $T$, respectively; and

$$\mathrm{MI}(R, T) = H(R) + H(T) - H(R, T),$$

where $H(R)$ and $H(T)$ compute the marginal entropies and $H(R, T)$ computes the joint entropy.
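Both metrics are easy to estimate from image samples; the MI sketch below uses a joint histogram (32 bins, an arbitrary choice) and base-2 logarithms:

```python
import numpy as np

def cc(R, T):
    """Correlation coefficient between two images."""
    r, t = R.ravel(), T.ravel()
    return np.cov(r, t)[0, 1] / (r.std(ddof=1) * t.std(ddof=1))

def mi(R, T, bins=32):
    """Mutual information MI = H(R) + H(T) - H(R, T) from a joint histogram."""
    joint, _, _ = np.histogram2d(R.ravel(), T.ravel(), bins=bins)
    p = joint / joint.sum()
    pr, pt = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    h_joint = -np.sum(p[nz] * np.log2(p[nz]))
    h_r = -np.sum(pr[pr > 0] * np.log2(pr[pr > 0]))
    h_t = -np.sum(pt[pt > 0] * np.log2(pt[pt > 0]))
    return h_r + h_t - h_joint
```

An image correlates perfectly with itself (CC = 1), and its MI with itself exceeds its MI with a randomly shuffled copy.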
The comparison results, in the form of the CC metric versus the iteration number, for satellite image registration are shown in Figure 6; the turbulence intensity of this experiment is given in terms of $D/r_0$, and the initial CC value of the two channels' deconvolution outputs is 0.72. We found that registration with coordination converged faster than registration without coordination, and the final registration accuracy was also slightly increased. The performance improvement arises because collaborative registration can take advantage of the structural features of the reference image transmitted through the regularization parameter.
In ground-based AO image fusion, another consideration is the registration performance under different atmospheric turbulence intensities. The comparison results, in the form of the normalized MI metric versus the turbulence intensity (in terms of $D/r_0$), for satellite image registration are shown in Figure 7. The mutual information measures the degree to which the intensity distributions of the reference image and the deformed image fail to be independent [31]; a larger MI value means more dependency between the two images. It can be seen from Figure 7 that collaborative registration always provides better accuracy across different turbulence intensities, and the MI index does not decrease noticeably even under strong turbulence. It can be considered that both the high-precision deconvolution of the preceding step and the collaborative registration contribute to these final results.
4. Conclusions
A hybrid scheme for ground-based adaptive optics image fusion was presented in this paper. A multichannel fusion framework consisting of parallel deconvolution followed by collaborative registration was proposed, and a Gaussian total variation regularization technique was introduced to improve the accuracy and the robustness of both deblurring and registration. The performance of the proposed method was tested and analyzed through simulated experiments. This work is a preliminary study on applying the idea of distributed coordination to the AO image fusion application; more effective coordination mechanisms between different image acquisition channels will be studied in future work.
Competing Interests
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the China Postdoctoral Science Foundation via Grant no. 2015M580851, the National Natural Science Foundation of China via Grant no. 61308083, and the Space Information Engineering Collaborative Innovation Center Fund.
References
 1. H. Li, B. S. Manjunath, and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical Models and Image Processing, vol. 57, no. 3, pp. 235–245, 1995.
 2. F. Šroubek and J. Flusser, “Multichannel blind deconvolution of spatially misaligned images,” IEEE Transactions on Image Processing, vol. 14, no. 7, pp. 874–883, 2005.
 3. M. Müller, W. Krüger, and G. Saur, “Robust image registration for fusion,” Information Fusion, vol. 8, no. 4, pp. 347–353, 2007.
 4. J. J. Green and B. R. Hunt, “Improved restoration of space object imagery,” Journal of the Optical Society of America A, vol. 16, no. 12, pp. 2859–2865, 1999.
 5. S. M. Jefferies and M. Hart, “Deconvolution from wave front sensing using the frozen flow hypothesis,” Optics Express, vol. 19, no. 3, pp. 1975–1984, 2011.
 6. D. Shi, C. Fan, H. Shen et al., “Reconstruction of spatially misaligned and turbulence degraded images,” Optics and Lasers in Engineering, vol. 50, no. 5, pp. 703–709, 2012.
 7. T. J. Schulz, “Multiframe blind deconvolution of astronomical images,” Journal of the Optical Society of America A: Optics, Image Science, and Vision, vol. 10, no. 5, pp. 1064–1073, 1993.
 8. F. Šroubek and P. Milanfar, “Robust multichannel blind deconvolution via fast alternating minimization,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1687–1700, 2012.
 9. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D: Nonlinear Phenomena, vol. 60, no. 1–4, pp. 259–268, 1992.
10. S. H. Chan, R. Khoshabeh, K. B. Gibson, P. E. Gill, and T. Q. Nguyen, “An augmented Lagrangian method for total variation video restoration,” IEEE Transactions on Image Processing, vol. 20, no. 11, pp. 3097–3111, 2011.
11. M. Droske and M. Rumpf, “A variational approach to nonrigid morphological image registration,” SIAM Journal on Applied Mathematics, vol. 64, no. 2, pp. 668–687, 2004.
12. T. Chan, S. Esedoglu, F. Park, and A. Yip, “Recent developments in total variation image restoration,” in Handbook of Mathematical Models in Computer Vision, vol. 17, Springer, 2005.
13. C. Wu, J. Zhang, and X.-C. Tai, “Augmented Lagrangian method for total variation restoration with non-quadratic fidelity,” Inverse Problems and Imaging, vol. 5, no. 1, pp. 237–261, 2011.
14. J. Liu, T.-Z. Huang, Z. Xu, and X.-G. Lv, “High-order total variation-based multiplicative noise removal with spatially adapted parameter selection,” Journal of the Optical Society of America A: Optics, Image Science, and Vision, vol. 30, no. 10, pp. 1956–1966, 2013.
15. Y. Zhang, W. Zhang, Y. Lei, and J. Zhou, “Few-view image reconstruction with fractional-order total variation,” Journal of the Optical Society of America A, vol. 31, no. 5, pp. 981–995, 2014.
16. D. M. Strong and T. F. Chan, “Spatially and scale adaptive total variation based regularization and anisotropic diffusion in image processing,” UCLA Math Department CAM Report, 1996.
17. Y.-L. You and M. Kaveh, “Blind image restoration by anisotropic regularization,” IEEE Transactions on Image Processing, vol. 8, no. 3, pp. 396–407, 1999.
18. J. Canny, “A computational approach to edge detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.
19. W. Xue, X. Mou, L. Zhang, A. C. Bovik, and X. Feng, “Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features,” IEEE Transactions on Image Processing, vol. 23, no. 11, pp. 4850–4862, 2014.
20. H. Liao, F. Li, and M. K. Ng, “Selection of regularization parameter in total variation image restoration,” Journal of the Optical Society of America A, vol. 26, no. 11, pp. 2311–2320, 2009.
21. G. Gong, H. Zhang, and M. Yao, “Construction model for total variation regularization parameter,” Optics Express, vol. 22, no. 9, pp. 10500–10508, 2014.
22. L. Yan, H. Fang, and S. Zhong, “Blind image deconvolution with spatially adaptive total variation regularization,” Optics Letters, vol. 37, no. 14, pp. 2778–2780, 2012.
23. D. Perrone and P. Favaro, “Total variation blind deconvolution: the devil is in the details,” in Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 2909–2916, IEEE, June 2014.
24. T. F. Chan and C.-K. Wong, “Total variation blind deconvolution,” IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 370–375, 1998.
25. X. Yang, C. Shen, F. Li, and C. Shen, “A combination of the total variation filter and a fourth-order filter for image registration,” Mathematical Problems in Engineering, vol. 2015, Article ID 235314, 16 pages, 2015.
26. Z. Zhang and R. S. Blum, “A hybrid image registration technique for a digital camera image fusion application,” Information Fusion, vol. 2, no. 2, pp. 135–149, 2001.
27. E. Haber and J. Modersitzki, “A multilevel method for image registration,” SIAM Journal on Scientific Computing, vol. 27, no. 5, pp. 1594–1607, 2006.
28. J.-P. Thirion, “Image matching as a diffusion process: an analogy with Maxwell's demons,” Medical Image Analysis, vol. 2, no. 3, pp. 243–260, 1998.
29. A. Schmidt-Richberg, “Variational image registration,” in Registration Methods for Pulmonary Image Analysis, pp. 17–38, Springer, 2014.
30. R. J. Noll, “Zernike polynomials and atmospheric turbulence,” Journal of the Optical Society of America, vol. 66, no. 3, pp. 207–211, 1976.
31. G. Hermosillo, C. Chefd'hotel, and O. Faugeras, “Variational methods for multimodal image matching,” International Journal of Computer Vision, vol. 50, no. 3, pp. 329–343, 2002.
Copyright
Copyright © 2016 Shiping Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.