Abstract

Image restoration is a long-standing problem in low-level computer vision. In this paper, we offer a simple but effective estimation paradigm for various image restoration problems. Specifically, we first propose a model-based Gaussian denoising method, Adaptive Dual-Domain Filtering (ADDF), by learning optimal confidence factors that are adjusted adaptively with the Gaussian noise standard deviation. In addition, by generalizing this learning approach to Laplace noise, a learning algorithm for the optimum confidence factors in Laplace denoising is presented. Finally, the proposed ADDF is plugged into off-the-shelf frameworks for image deblurring and single image super-resolution (SISR). The resulting approach, named Plug-ADDF, achieves promising performance. Extensive experiments validate that the proposed ADDF for Gaussian and Laplace noise removal indeed yields visual and quantitative improvements over some existing state-of-the-art methods. Moreover, our Plug-ADDF for image deblurring and SISR also demonstrates superior performance both objectively and subjectively.

1. Introduction

Image restoration (IR), which aims to recover the latent clean image from its degraded observation, is a classical yet extensively studied problem owing to its high practical value in various low-level computer vision applications [1–4]. A widely used degradation model in image restoration expresses the observation as a degradation operator applied to the clean image plus additive Gaussian or Laplace noise with a given standard deviation. By specifying different degradation operators, one obtains different IR tasks: image denoising when the operator is the identity, image deblurring when it is a blurring operator, and image super-resolution when it is a composite operator of blurring and downsampling. In this paper, we focus on these three IR tasks.
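For concreteness, the degradation model described above can be written compactly as follows; the notation is ours, since the original symbols are not reproduced in this text:

% Degradation model (notation ours): x is the latent clean image, y the
% observation, and v additive Gaussian or Laplace noise with standard
% deviation sigma.
y = Hx + v, \qquad
H = \begin{cases}
  I    & \text{image denoising,} \\
  B    & \text{image deblurring ($B$ a blurring operator),} \\
  S\,B & \text{super-resolution ($S$ downsampling, $B$ blurring).}
\end{cases}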

Although real-world noise is not independent and identically distributed (i.i.d.), the removal of Gaussian and Laplace noise remains a meaningful problem. In addition, denoising subproblems also arise in IR tasks such as image deblurring and single image super-resolution (SISR). It is well known that BM3D [5] is a highly engineered image denoising method. The denoising quality of BM3D, which combines the advantages of patch-based techniques with transform-domain filtering, is still considered state of the art today. However, this block-matching-based method produces visible artifacts in some homogeneous regions, manifesting as low-frequency noise [6–8]. Another method, dual-domain image denoising (DDID) [9], produces results comparable to BM3D although it is exceptionally simple. DDID is an iterated, guided method that combines the bilateral filter (BF) [10–13] in the spatial domain with the short-time Fourier transform (STFT) [14] and wavelet shrinkage in the frequency domain. Nevertheless, these methods are not good at preserving irregular texture details or at avoiding artifacts.

Recently, a sparse approximation approach [18] was presented for high-density impulse noise removal by utilizing an inverse-distance weighting prediction model. The method first decomposes the input image into overlapped patches based on the detection of noise and noise-free pixels; then the potential noise-free pixels are predicted by the inverse-distance weighting model; finally, an error-constrained minimization problem is solved by orthogonal matching pursuit. At the same time, a morphological mean (MM) filter [19] has also been used for the removal of high-density impulse noise through two significant modules: a noise-free pixel counter (NPC) module and a morphological pixel dilation (MPD) module. Specifically, the NPC module detects the number and position of the noise-free pixels, while in the subsequent MPD module a morphological dilation operation on the noise-free pixels is iteratively applied to replace the neighboring noise pixels. These two approaches exhibit excellent qualitative and quantitative performance on images contaminated with high-density impulse noise. In contrast, the proposed adaptive dual-domain filtering (ADDF) has obvious advantages in removing Gaussian and Laplace noise, especially when the noise variance is large.

AC-PT [20] implements detail-preserving image denoising based on adaptive clustering of nonlocal patches and a progressive PCA thresholding approach. The basic process of AC-PT is as follows. Initially, the global noise level is estimated based on the Marchenko-Pastur (MP) law [21]. Subsequently, an adaptive clustering method is utilized to collect self-similar blocks by exploiting the global nonlocal self-similarity of the image. Lastly, progressive PCA thresholding filtering is applied to denoise the clustered patches via an MP-SVD step and an LMMSE step. In short, AC-PT is a substantial improvement over BM3D, and its superior denoising performance is obtained by adaptive clustering. Nevertheless, the proposed ADDF does not need to search for similar blocks in an image, thereby effectively reducing the amount of computation.

In [22], a denoising algorithm for low-dose CT images employs a multiframe statistical tool, blind source separation (BSS), combined with the standard denoiser block-matching 3-D filtering (BM3D) [5]. On the one hand, the essence of BSS is to iteratively solve a numerical optimization problem using a maximum-entropy principle [23]. On the other hand, the BSS-plus-BM3D hybrid model with different frame numbers is successfully applied to an image sequence so as to achieve excellent CT image denoising. The method improves the quality of CT image reconstruction while increasing the complexity and computational time of the algorithm, which is mainly reflected in the optimization process of BSS.

A wavelet transform approach [24] was proposed to denoise 1-D experimental signals by selecting the number of decomposition levels and calculating new noise thresholds. This method utilizes multiresolution analysis to achieve a time-frequency decomposition of the signal, thereby effectively separating the noise from the signal. To be specific, an objective method for selecting the number of wavelet decomposition levels is presented by classifying the sparsity of the detail components, and an appropriate threshold can be formulated without noise estimation. Extensive experiments on electron spin resonance (ESR) spectroscopy [25] show that this method can effectively remove noise and accurately recover signal peaks. Generally speaking, existing sophisticated and highly effective denoising methods are approaching the ceiling of denoising performance [26], which indicates that any further improvement on this basis comes at a heavy price.

Dual-domain filtering (DDF) (see Algorithm 1) [27], which builds on DDID, is a noise estimator in the spatial and frequency domains. One can see from Algorithm 1 that one of the most critical procedures in DDF is the choice of the two confidence factors, one for each domain. This is because these two factors directly affect the accuracy of the noise estimate in the two domains and ultimately determine the final denoising effect. However, the confidence levels in DDF are too low to provide satisfactory results, especially when the noise is severe. Therefore, we aim to learn the optimal confidence factors in the two domains to improve the accuracy of noise estimation. Meanwhile, we note that although learning-based approaches [28–34] have recently become very popular, a mass of parameters need to be learned and the training time is often very long. Furthermore, these learning methods need a lot of samples to train the constructed network, which makes the learning process very complicated. With the above considerations, we aim to improve the performance of a model-based method by simple learning with as few samples as possible. All of this motivates us to learn the optimal confidence factors in the spatial and frequency domains, so as to further improve the accuracy of noise estimation and enhance the performance of image restoration.

Algorithm 1: Dual-domain filtering (DDF) [27].
Step 1. Noise estimation in the spatial domain: the noise at the center pixel of each filter window is estimated by means of the bilateral kernel.
Step 2. Noise estimation in the frequency domain: the Fourier coefficients of the window are shrunk by the frequency kernel, which is defined in terms of the window radius.
Step 3. Image denoising: initialize the guide image with the noisy input and perform the guided iteration, in which the iteration counter counts down from the total number of iterations to 1.
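For readers who prefer code, the following is a minimal NumPy sketch of one dual-domain pass (bilateral smoothing plus DFT shrinkage) at a single pixel, in the spirit of the three steps above. It is a simplified illustration, not the exact published DDF formulas; the kernel forms, parameter names, and default values are our own assumptions.

import numpy as np

def dual_domain_pixel(noisy_win, guide_win, sigma, gamma_s=0.7, gamma_f=0.8,
                      sigma_spatial=7.0):
    # Simplified dual-domain pass for the pixel at the window centre
    # (illustrative only; not the exact DDF equations).
    r = noisy_win.shape[0] // 2
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    centre = guide_win[r, r]

    # Step 1: bilateral kernel built from the guide image; spatial estimate.
    k = (np.exp(-(guide_win - centre) ** 2 / (gamma_s * sigma ** 2 + 1e-12))
         * np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma_spatial ** 2)))
    spatial_est = np.sum(k * noisy_win) / np.sum(k)

    # Step 2: shrink the Fourier coefficients of the weighted residual;
    # each coefficient of k*noise has variance sigma^2 * sum(k^2).
    residual = k * (noisy_win - spatial_est)
    coeffs = np.fft.fft2(residual)
    noise_var = sigma ** 2 * np.sum(k ** 2)
    shrink = np.exp(-gamma_f * noise_var / (np.abs(coeffs) ** 2 + 1e-12))
    shrunk = np.real(np.fft.ifft2(shrink * coeffs))

    # Step 3 (one guided pass): smooth part plus the shrunk detail at the
    # centre; with no shrinkage this simply returns the noisy centre value.
    return spatial_est + shrunk[r, r]

In DDF, such a pass is applied at every pixel and repeated with the previous output serving as the new guide image, while the two confidence factors vary with the iteration counter.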

At present, many scholars have done a great deal of work on noise estimation [35–38]. In this paper, however, instead of estimating the noise parameters, we estimate the optimal confidence factors, which are closely related to the noise, in the spatial and frequency domains. Surprisingly, by learning on only a few samples, the proposed adaptive dual-domain filtering (ADDF) attains good generalization ability. The learning of the optimal confidence factors, which does not increase the computational complexity of the algorithm, makes the denoising effect of ADDF significantly better than that of DDF.

Inspired by the fact that Plug-and-Play ADMM [39] allows one to plug any off-the-shelf image denoising algorithm into the ADMM algorithm as a subproblem, another line of our research is to plug the proposed denoiser ADDF into the method frameworks of the other two IR tasks: image deblurring and SISR. Experimental results demonstrate that simple learning on a small number of samples can improve the effectiveness of a model-based algorithm. Moreover, the proposed method, called Plug-ADDF, can effectively improve image deblurring and SISR performance. The major contributions of this paper are summarized as follows:

(1) Two novel confidence factors for Gaussian noise estimation are presented. By learning the optimal confidence power and iteration number, which vary with the Gaussian noise standard deviation, the accuracy of noise estimation in ADDF is made better than that in DDF.

(2) By generalizing the learning approach to Laplace noise removal, an algorithm for learning the optimum confidence factors is developed to accurately estimate the noise, resulting in a simple yet effective Laplace denoising method. We show that the Laplace denoising performance of ADDF is clearly superior to that of DDF.

The rest of the paper is organized as follows. Section 2 presents the learning of the optimal confidence factors in Gaussian denoising. Section 3 describes the proposed algorithm and experiments of Gaussian denoising. In Section 4, we generalize the proposed learning approach to the removal of Laplace noise. Section 5 presents image deblurring based on ADDF, while single image super-resolution based on ADDF is given in Section 6. Finally, Section 7 concludes this work.

2. The Proposed Optimal Confidence Factors for Gaussian Denoising

In DDF [27], the confidence factors for Gaussian noise estimation in the spatial and frequency domains are expressed by (1) and (2), where the iteration number is fixed at its default value and the iteration counter counts down from that value to 1. In Step 2 of Algorithm 1, the purpose of subtracting the spatial-domain noise estimate is to remove the noise inside the filter window, and the boundary of the remaining part is polished by the bilateral kernel, so that the ringing effect can be efficaciously reduced when implementing the DFT. However, when the noise is severe, the noise inside the window is also severe, and a small confidence factor makes the subtracted noise insufficient. Therefore, by adding a power to the confidence factor, the confidence of the noise estimate can be increased. In addition, from Figure 7 we can find, from a statistical perspective, that ADDF with the increased confidence factors estimates the noise more accurately than DDF with the fixed confidence factors. These observations indicate that we should increase the confidence factors of the noise estimate, especially when the noise is severe. With the above considerations, in this paper we propose two novel confidence factors for Gaussian noise estimation in the two domains, given by (3) and (4), where the added exponent is termed the "confidence power." We take N = 8 as an example to briefly analyze the relationship between the proposed confidence factors and the original ones. As shown in Figure 1, as the confidence power increases from 0 to 1, the proposed confidence factors decrease. When the confidence power equals 1, the proposed factors reduce to the original confidence factors; when it equals 0, the factors are increased to the maximum value 1. Therefore, the proposed confidence factors (3) and (4) are an increase over the original ones (1) and (2).
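Since equations (1)-(4) are not reproduced in this version of the text, the relationship they describe can be sketched compactly as follows; the symbols (a DDF confidence factor at a given iteration, and the confidence power) are ours, and the exact per-iteration expressions are those of DDF [27]:

% gamma_s(n), gamma_f(n) in (0, 1] denote the DDF confidence factors (1)-(2)
% at iteration n, and p denotes the proposed confidence power.
\tilde{\gamma}_s(n) = \gamma_s(n)^{\,p}, \qquad
\tilde{\gamma}_f(n) = \gamma_f(n)^{\,p}, \qquad 0 \le p \le 1.
% p = 1 recovers the original factors; p = 0 raises both factors to their
% maximum value 1; since 0 < gamma <= 1, decreasing p increases the factors.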

In our proposed confidence factors (3) and (4), the confidence power and iteration number vary adaptively with the noise standard deviation. Obviously, (1) and (2) are special cases of (3) and (4) in which the confidence power is fixed at 1 and the iteration number is fixed at its default value. Therefore, the proposed confidence factors are more general forms of the ones in DDF. These adaptive confidence factors can estimate the noise more robustly, which improves the performance of image recovery. In the following, we use a small number of training samples to learn the optimal confidence power and the optimum iteration number, which together constitute the optimal confidence factors for noise estimation.

2.1. Learning the Optimal Confidence Power

Firstly, in the two-dimensional grid shown in Table 1, the different values of the noise standard deviation and of the confidence power are listed along the horizontal and vertical axes, respectively, and each pair of values corresponds to one location in the grid. We use the confidence factors (3) and (4) with the different value pairs in Table 1 to implement DDF on the training images (a) and (b) in Figure 2, and the mean PSNR over the two images is placed at the corresponding position. As a result, we obtain a matrix, denoted as A and shown in Table 1. It should be mentioned that here the iteration number is fixed at the default value used in [27].

Next, theoretically, when a specific noise standard deviation is given in DDF, there is an optimal confidence power that maximizes PSNR. From a large number of experiments, we have found that this optimal confidence power changes with the noise level. At the same time, we would like the curve relating the optimal confidence power to the noise standard deviation to be monotonous and smooth. On the other hand, the cumulative PSNR along this curve should be as large as possible when DDF is implemented with the corresponding pairs of values. In order to achieve these goals, we design Algorithm 2.

Step 1. Consider the element in the i-th row and j-th column of the matrix A, and record the largest element in column 1 of A.
Step 2. In column 2 of A, each element is summed with every element of column 1 that lies above it (including the element in the same row). The maximum of these sums replaces that element in A, and the largest of the updated elements in column 2 is then marked.
Step 3. In column 3 of A, each element is summed with every element of column 2 that lies above it (including the element in the same row), and the maximum sum replaces that element in A. Similarly, the largest of the updated elements in column 3 is marked.
Step 4. In the same manner as Step 3, the elements in columns 4 to 9 of A are updated sequentially. The resulting matrix is denoted as B and shown in Table 2, and the largest element in each column of B is recorded.
Step 5. Connecting these largest (bold) elements gives the curve shown in Figure 3(a).
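A compact NumPy sketch of the column-wise accumulation in Algorithm 2 is given below; the function name is ours, and we read "above (including)" as the rows with index at most the current one.

import numpy as np

def monotone_psnr_selection(A):
    # A[i, j] is the mean PSNR obtained with the i-th candidate confidence
    # power at the j-th noise level (Table 1).  Returns the updated matrix
    # B (Table 2) and, per column, the row index of its largest element
    # (the bold entries connected in Figure 3(a)).
    B = np.asarray(A, dtype=float).copy()
    for j in range(1, B.shape[1]):
        # each element of column j is summed with the best element at or
        # above its own row in the already-updated previous column
        B[:, j] += np.maximum.accumulate(B[:, j - 1])
    return B, B.argmax(axis=0)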

From Figure 3(a) obtained by Algorithm 2, it can be found that the whole curve shows a basically monotonically decreasing trend. In addition, when the noise is small, the optimal confidence power changes greatly from one noise level to the next; when the noise is large, the difference between adjacent steps is small.

Finally, in order to smooth the curve in Figure 3(a) and further explore the relationship between the optimal confidence power and the noise standard deviation, we perform a curve fitting. From the general trend of the curve in Figure 3(a), we empirically conjecture that the regression of the original data basically conforms to the negative exponential function given in (5).

In formula (5), one parameter determines the decay rate of the function; the larger this attenuation coefficient, the faster the confidence power decays as the noise standard deviation increases, and it is inversely proportional to the half-decay period. A constant deviation term improves the agreement between the fitted curve and the experimental data.

By curve fitting with Matlab, the coefficients are acquired, yielding the fitted expression (6). The fitted curve is shown in Figure 3(b). It can be found that, as the noise standard deviation increases, the optimal confidence power decreases monotonically and adaptively. At the same time, it can be seen from Figure 1 that when the confidence power decreases (with the iteration counter and total iteration number fixed), the proposed confidence factors increase. Thus, the greater the noise, the greater the confidence factors used for noise estimation, which may seem inconsistent with intuition. In fact, the number of iterations is assumed here to be constant, which implies that the confidence factors must also control the intensity of denoising.
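The fitting step itself is a small regression; a minimal SciPy sketch is shown below. The functional form a·exp(−b·σ) + c is our reading of the negative exponential description above, and the data values are placeholders standing in for the selected entries of Table 2, not the paper's numbers.

import numpy as np
from scipy.optimize import curve_fit

def neg_exponential(sigma, a, b, c):
    # b is the attenuation (decay-rate) coefficient, c the deviation term
    return a * np.exp(-b * sigma) + c

# placeholder (noise level, optimal confidence power) pairs -- not the
# paper's data
sigma_levels = np.array([10., 20., 30., 40., 50., 60., 70., 80., 90.])
best_power = np.array([0.90, 0.72, 0.58, 0.48, 0.42, 0.37, 0.34, 0.32, 0.31])

params, _ = curve_fit(neg_exponential, sigma_levels, best_power,
                      p0=(1.0, 0.03, 0.3))
optimal_power = lambda sigma: neg_exponential(sigma, *params)  # plays the role of (6)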

2.2. Learning the Optimum Iteration Number

On the basis of the optimal confidence power, this part focuses on learning the optimal iteration number. We empirically speculate that the iteration number should also be adjusted adaptively with the noise standard deviation. Using this idea, the preliminary experimental results on image (a) in Figure 2 are shown in Figure 4(a). It is encouraging that the experimental results are consistent with our conjecture, which motivates us to further explore the optimal relationship between the iteration number and the noise level. Thus, we take advantage of the learned optimal confidence power in (6) and increase the iteration number when implementing DDF on the training images (a) and (c) in Figure 2. The results are shown in Table 3.

It can be found from Table 3 that when the noise standard deviation is small, PSNR decreases with increasing iteration number; when the noise becomes severe, this trend changes. We briefly analyze the reason for this phenomenon. As mentioned for Step 2 of Algorithm 1, the purpose of subtracting the spatial-domain noise estimate is to remove the noise inside the filter window, and the boundary of the remaining part is polished by the bilateral kernel, so that the ringing effect can be efficaciously reduced when implementing the DFT. When the noise is small, the noise inside the window is also small, so the confidence factor applied to the subtracted estimate cannot be too large. However, as the iteration number increases, the confidence factor also increases (for the same confidence power and counter value), which makes the noise estimation not accurate enough and results in a downward trend in PSNR. As shown in Table 3, this situation changes as the noise becomes large.

According to Table 3, in order to fit a strictly monotonic curve, we sacrifice a little PSNR; i.e., at one noise level we take the iteration number as 11 instead of the PSNR-maximizing value. The fitted curve of the iteration number versus the noise standard deviation is shown in Figure 4(b), and the relationship between them can be expressed as (7). Considering that the number of iterations must be an integer, relationship (7) is adjusted to take integer values, giving (8).

Based on the above function (8), we can obtain the optimal number of iterations once the noise level of the noisy image is given. It can be seen that the more severe the noise, the more iterations are needed in DDF. Intuitively, this is primarily because in DDF the result of each iteration serves as the guide image for the next iteration; as the number of iterations increases, the result of DDF gets closer to the clean image.
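If, as the text suggests, relation (8) simply maps the fitted real-valued relation (7) to an admissible integer, the rule can be sketched as below; the linear placeholder for (7) is ours and does not reflect the fitted coefficients of Figure 4(b).

def fitted_iterations(sigma):
    # placeholder for the fitted relation (7); the true coefficients come
    # from the regression in Figure 4(b), not from this sketch
    return 8.0 + 0.03 * sigma

def optimal_iterations(sigma):
    # role of relation (8): constrain the fitted value to a positive integer
    return max(1, int(round(fitted_iterations(sigma))))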

In our ADDF, the method of determining the optimal confidence factors for noise estimation is essentially a regression of data. Our ADDF operates on the block corresponding to each pixel, and the results obtained on other images change little, so we believe that the optimal confidence factors obtained by regression are stable. This also indicates that the learning algorithm for the optimal confidence factors proposed in this paper is robust.

Since we propose a model-based approach, it is possible to obtain strong generalization ability by learning a small number of parameters from a small amount of training data. This differs from CNNs, where a large amount of data is needed to learn a large number of parameters. This view will be validated in the Gaussian denoising experiments in the next section.

3. Proposed Algorithm and Experiments for Gaussian Denoising

3.1. Proposed Algorithm

The existing image denoising methods [5, 28, 31, 40–43] all assume that the noise standard deviation is known; its estimation is a separate issue [44, 45]. Therefore, from the previous learning, when the noise standard deviation in Gaussian denoising is given, we can get the optimal confidence power and the optimal iteration number, and then the optimal confidence factors for noise estimation can be calculated. The proposed learning of the optimal confidence factors in Gaussian noise removal is summarized in Algorithm 3.

Input: noise standard deviation
1: Calculate the optimal confidence power by the fitted relation (6)
2: Calculate the optimal iteration number by relation (8)
3: Initialize the iteration counter with the optimal iteration number
4: for the counter counting down from the optimal iteration number to 1 do
5:    Calculate the optimal confidence factors by (3) and (4)
6: end for
Output: the optimal confidence factors in the spatial and frequency domains
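A short code sketch of Algorithm 3 follows. The callable interfaces and names are assumptions made for illustration; following the description in Section 2, the proposed factors (3) and (4) are obtained by raising the DDF confidence factors (1) and (2) to the learned confidence power.

def addf_confidence_factors(sigma, optimal_power, optimal_iterations,
                            ddf_factor_spatial, ddf_factor_freq):
    # optimal_power(sigma)      -> confidence power from the fitted relation (6)
    # optimal_iterations(sigma) -> iteration number from relation (8)
    # ddf_factor_*(n, N)        -> original DDF confidence factors (1)-(2)
    p = optimal_power(sigma)                  # step 1
    N = optimal_iterations(sigma)             # step 2
    factors = []
    for n in range(N, 0, -1):                 # steps 3-6: count down to 1
        factors.append((n,
                        ddf_factor_spatial(n, N) ** p,   # role of (3)
                        ddf_factor_freq(n, N) ** p))     # role of (4)
    return N, factors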

Since the optimal confidence factors are obtained, we can robustly estimate the noise in the spatial and frequency domains, and the improved accuracy of noise estimation in DDF promotes the denoising effect. For clarity, we make a few remarks:

(1) When the iteration number is fixed at its default value, we learn the optimal confidence power.

(2) Based on the learned optimal confidence power, we learn the optimal iteration number.

(3) The optimal confidence power is robust to the iteration number; i.e., it is not very sensitive to the iteration number and is in fact mainly determined by the noise standard deviation. In addition, the optimal confidence power contributes principally to the improvement of denoising performance. Thus, the confidence power and iteration number above are updated only once.

In addition, in the training phase we use the three clean images shown in Figure 2 to learn the optimal confidence factors for noise estimation in the spatial and frequency domains. Among these three images, images (a) and (b) are employed to learn the optimal confidence power, while images (a) and (c) are utilized to learn the optimum iteration number. The optimal confidence power and iteration number together constitute the optimal confidence factors.

In summary, the noise is estimated in the spatial and frequency domains by utilizing the learned optimal confidence factors, which vary adaptively with the noise standard deviation. This image denoising method based on adaptive noise estimation in the two domains is called adaptive dual-domain filtering (ADDF).

3.2. Experiments of Gaussian Denoising

Without loss of generality, in the Gaussian denoising experiments we use 9 color test images from two different test datasets: LIVE1 [15], which contains 29 natural images, and Set14 [16], from which 12 images are used. All these images are widely used for image quality assessment.

We implement dual-domain filtering with the learned optimal confidence factors on the 9 color images for quantitative comparison with the similar state-of-the-art methods BM3D [5], DDID [9], and DDF [27], which are standard model-based denoising baselines. It should be noted that DDF has already been compared with related denoising methods in [27], so our ADDF is compared only with these three methods. We consider five noise levels, the largest being 100. The codes of the three competing methods are downloaded from the authors' websites, and the default parameter settings are used in our experiments.

The experimental results are shown in Table 4. As can be seen, our ADDF is robust to the noise variance and achieves higher PSNR than BM3D, DDID, and DDF. Moreover, as the noise standard deviation increases, the PSNR gain of ADDF widens.

To further verify the visual performance of the proposed ADDF, we first apply it to four images with various noise standard deviations. Figure 5 compares our new denoiser against the similar state-of-the-art denoising methods BM3D, DDID, and DDF; ADDF produces the cleanest and smoothest results. The robust noise estimator makes the restored images more consistent with our visual perception.

On the other hand, in Figure 6, examples of Gaussian denoising by the four methods are presented on the image Kodak23 from the test dataset LIVE1 [15]. To highlight the differences, local details are cropped and zoomed in. One can see that ADDF preserves image detail features such as edges and textures, which are often oversmoothed by other denoising methods. It should be noted that the three natural images shown in Figure 2, which serve as the training images, contain almost no texture; nevertheless, on the test image Kodak23, which is rich in texture, ADDF increases PSNR by 0.29 dB over DDF.

Finally, Figure 7 compares the probability density curves of the ground-truth noise and of the noises estimated by DDF and ADDF. From the partially enlarged illustration in Figure 7, we can find that, in the vicinity of the noise mean, where the probability density of the noise is largest, the probability density curve of the noise estimated by ADDF is closer to that of the true Gaussian noise. This implies that ADDF is more accurate than DDF in terms of noise estimation. This is mainly because ADDF employs confidence factors that change with the noise standard deviation for noise estimation in the spatial and frequency domains, while DDF uses fixed confidence factors.

These experiments demonstrate that introducing the optimal confidence power and iteration number yields significant improvements over the previous state-of-the-art works. We therefore believe that the learned optimal confidence factors for noise estimation are the critical contributors to the proposed method.

4. Generalization to Laplace Denoising

From the previous section, one can see that, to a large extent, the proposed confidence factors can change the distribution of the estimated noise, as illustrated in Figure 7. At the same time, we note that Laplace noise has many important applications in the industrial field. Therefore, we are motivated to generalize the proposed ADDF from Gaussian denoising to the removal of Laplace noise.

Here the degraded image is given by the clean image plus additive Laplace noise. In this section, we work under the assumption that the noise obeys a Laplace distribution [46–49], whose probability density function (PDF) is f(x) = (1/(2b)) exp(−|x − μ| / b), where μ is the location parameter and equals the mean, while the scale parameter b is related to the variance by σ² = 2b².
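As a small illustration of the scale-variance relation above, zero-mean Laplace noise with a prescribed standard deviation can be generated as follows; the helper name is ours, and NumPy's Laplace sampler is parameterized by the scale b.

import numpy as np

def laplace_noise(shape, sigma, rng=None):
    # variance of a Laplace variable is 2 * b**2, so b = sigma / sqrt(2)
    rng = np.random.default_rng() if rng is None else rng
    return rng.laplace(loc=0.0, scale=sigma / np.sqrt(2.0), size=shape)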

4.1. Learning the Optimal Confidence Factors

It is assumed in this work that the mean of the additive Laplace noise is zero and that its standard deviation is known; hence, the noise obeys a zero-mean Laplace distribution with the given standard deviation. Similar to the learning procedure of the optimal confidence power and iteration number in the confidence factors (3) and (4) for Gaussian denoising, the relationship between the optimal confidence power and the noise standard deviation in Laplace denoising is obtained below, and the fitted curve is shown in Figure 8(a).

The reason we adopt sine functions is that, when the noise standard deviation is large, the fitted curve closely follows the original data. Similarly, the expression relating the optimal iteration number N to the noise standard deviation is fitted in the same way, and Figure 8(b) shows the fitted curve.

Based on the above learning results, once the standard deviation of the Laplace noise is given, we can get the optimal confidence power and iteration number, and thus the optimal confidence factors for Laplace noise estimation. We summarize the procedure as Algorithm 4.

Input: noise standard deviation
1: Calculate the optimal confidence power from the fitted Laplace relation above
2: Calculate the optimal iteration number from its fitted Laplace relation
3: Initialize the iteration counter with the optimal iteration number
4: for the counter counting down from the optimal iteration number to 1 do
5:    Calculate the optimal confidence factors by (3) and (4)
6: end for
Output: the optimal confidence factors in the spatial and frequency domains

4.2. Experiments of Laplace Denoising

In order to further test the capability of the proposed ADDF for Laplace denoising, we again consider five noise levels, the largest being 100. We implement ADDF on 9 test images from the three different test datasets LIVE1 [15], Set14 [16], and Set5 [17]. Table 5 lists the comparison between our proposed ADDF and DDF for Laplace noise removal. As can be seen from all the examples, in terms of PSNR our learning method leads to significant improvements over DDF. Moreover, as the noise standard deviation increases, the PSNR gain of ADDF over DDF generally continues to grow. Figure 9 shows the visual effect of ADDF on the image "Butterfly" from the dataset Set5. When the Laplace denoising experiment is applied to this textured test image, ADDF achieves a PSNR increase of 0.23 dB over DDF. All of this indicates that our ADDF is robust across images from different datasets.

In order to further validate our claims, we carry out an experiment on a synthesized image of three people playing basketball. This image is collected from the Internet and is contaminated with Laplace noise. The results are shown in Figure 10, where one can see that our approach ADDF obtains better visual and quantitative results. From the above experimental analysis, it is not difficult to see that, compared with DDF, the noise estimated by the proposed ADDF is closer to the true Laplace noise.

Based on the three images shown in Figure 2, we have learned the optimal confidence factors for removing Gaussian and Laplace noise and obtained the desired effects. When we replace the three images in Figure 2 with other images and learn the optimal confidence power and iteration number by the same training method, the resulting optimum confidence factors for estimating the noise in the spatial and frequency domains are basically the same as the original ones, which indicates that the learning algorithm proposed in this paper is robust.

5. Image Deblurring Based on ADDF

5.1. Proposed Algorithm Plug-ADDF for Image Deblurring

Due to the ill-posed nature of image restoration, the degraded image we observe is the blurred clean image corrupted by Gaussian or Laplace noise, where the degradation operator is a blurring operator. To recover the clean image from the observation, we solve the optimization problem (12), which minimizes an energy function composed of a data fidelity term, a regularization term, and a trade-off parameter. The data fidelity term guarantees that the solution conforms to the degradation process, and the regularization term enforces the desired properties of the output.

Because it involves the inverse of the blurring operator, solving problem (12) directly is sometimes difficult. With the proximal operator [50], the proximal forward-backward splitting (PFBS) approach was applied in [51]. The PFBS algorithm [50] applied to problem (12) can be described by the update (13), in which a gradient step with a given step size on the data fidelity term is followed by the proximal operator defined in (14).
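Since the corresponding equations are not reproduced in this text, the PFBS update (13) and the proximal operator (14) can be written in standard notation (symbols ours) as:

% delta: step size, H: blurring operator, Phi: regularizer, lambda: trade-off
x^{k+1} = \operatorname{prox}_{\delta\lambda\Phi}\!\bigl(x^{k} - \delta\, H^{\top}(H x^{k} - y)\bigr),
\qquad
\operatorname{prox}_{\lambda\Phi}(z) = \arg\min_{x}\ \tfrac{1}{2}\,\lVert x - z\rVert^{2} + \lambda\,\Phi(x).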

The main advantage of this proximal operator is that problem (14) is convex, so it has a unique minimizer for any input. By introducing an auxiliary variable, the minimization of problem (12) can be carried out by the two-step iterative scheme (15): a gradient step on the data fidelity term produces the auxiliary variable, to which the proximal operator is then applied.

Motivated by the idea of Plug-and-Play ADMM [39], one can observe that computing the proximal operator is actually a standard denoising problem, which can be solved with any off-the-shelf denoising algorithm, as expressed in (16).

Here we choose our proposed denoising approach ADDF, which indeed yields visual and quantitative improvements over similar methods. The solution of problem (12) is then obtained via the alternating iterative algorithm (17). We name the proposed method for image deblurring Plug-ADDF.
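A minimal sketch of the alternating iteration (17) is given below: a gradient (data-fidelity) step followed by ADDF in place of the proximal operator. The callables blur, blur_adjoint, and addf_denoise, as well as the default settings, are assumptions for illustration rather than the paper's implementation.

import numpy as np

def plug_addf_deblur(y, blur, blur_adjoint, addf_denoise, sigma,
                     step=0.1, iters=10):
    # y: observed blurry, noisy image; blur / blur_adjoint: the blurring
    # operator and its adjoint; addf_denoise: ADDF used as plug-in denoiser.
    x = y.copy()
    for _ in range(iters):
        z = x - step * blur_adjoint(blur(x) - y)   # gradient step on the data term
        x = addf_denoise(z, sigma)                 # denoising step replaces the prox
    return x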

5.2. Experiments of Image Deblurring

As a common setting, we apply a blur kernel and add Gaussian or Laplace noise of a given standard deviation to synthesize the blurred images. Furthermore, we assume that the convolution is performed under circular boundary conditions.

5.2.1. Blurry Images Contaminated by Gaussian Noise

To make an evaluation, we employ a commonly used Gaussian blur kernel with standard deviation 1.0, and Gaussian noise is then added to the blurry images. As competing methods, we choose the three model-based methods BM3D, DDID, and DDF and nest them separately into the image deblurring framework; we name the resulting deblurring methods Plug-BM3D, Plug-DDID, and Plug-DDF.

The next issue we consider is the parameter setting. There are two parameters: the step size and the number of iterations. In the gradient descent, we set the step size empirically, and in our experiment a certain number of iterations is needed to achieve satisfactory results.

It is worth noting that, in the iterations of the proposed Plug-ADDF, ADDF makes full use of the optimal confidence factors for Gaussian denoising from Algorithm 3. We believe that these adaptive optimal confidence factors, which correspond to different noise levels, are well suited to the denoising step in image deblurring.

To verify the performance of the proposed Plug-ADDF for image deblurring, we apply it to ten test images from the two datasets LIVE1 and Set5, shown in Figure 11. The PSNR results of the four deblurring methods are shown in Table 6. As one can see, the proposed Plug-ADDF achieves promising PSNR results. Figure 12 illustrates the deblurred image "Bird" produced by the four methods. Plug-BM3D, Plug-DDID, and Plug-DDF tend to smooth the edges and produce color artifacts; in contrast, the proposed Plug-ADDF restores the clarity of the image.

5.2.2. Blurry Images Contaminated by Laplace Noise

This subsection focuses on the recovery of blurred images contaminated by Laplace noise. First, the basic parameter settings in the experiments are introduced. We use a Gaussian blur kernel with standard deviation 0.5, and Laplace noise is then added to the blurry images. In the gradient descent, we choose a small step size, and we run the alternating iterative process in (17) ten times. It should be emphasized that in these iterations Plug-ADDF uses the optimal confidence factors for Laplace denoising from Algorithm 4 so as to obtain excellent image deblurring results.

Next, we briefly describe the experimental results of image deblurring. The PSNR results of Plug-DDF and the proposed Plug-ADDF on the ten test images are shown in Table 7. Figure 13 illustrates the deblurred image "Leaves" produced by the two methods. As one can see, our Plug-ADDF achieves promising deblurring results.

6. Single Image Super-Resolution Based on ADDF

6.1. Proposed Algorithm Plug-ADDF for Single Image Super-Resolution (SISR)

In this section, we improve the performance of SISR by embedding the proposed ADDF into the SISR method framework. The observed degraded image is obtained by applying a composite operator of blurring and downsampling to the original image and adding Gaussian or Laplace noise. Our goal is to recover the original high-resolution image from this observation; the SISR problem can be formulated as the optimization problem (18).

Motivated by the SISR approach proposed in [52], which iteratively alternates a back-projection [53] step and an image denoising step, we first utilize the back-projection step (19): the current estimate is degraded by the degradation operator with a given downscaling factor, the residual with respect to the observed LR image is upscaled by a bicubic interpolation operator with the same factor, and the result is added back to the estimate with a step size.

Next, for the denoising step in SISR, we also employ the proximal operator [50] and an auxiliary variable. Hence, problem (18) can be solved by the iterative algorithm (20).

Finally, based on the fact that Plug-and-Play ADMM [39] supports any existing denoiser, we replace the solution of the proximal operator with the proposed denoising approach ADDF, which has an excellent effect on noise removal. The proposed algorithm Plug-ADDF for SISR is therefore the two-step alternating iteration (21): a back-projection step followed by ADDF denoising.
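A minimal sketch of the two-step iteration (21) is given below: an iterative back-projection step followed by ADDF denoising. The SciPy-based degradation and the bicubic up/downscaling are illustrative stand-ins for the operators in (19), and addf_denoise and the default settings are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def plug_addf_sisr(y_lr, addf_denoise, sigma, scale=2,
                   blur_std=0.2, step=0.2, iters=8):
    # initialize the HR estimate by bicubic upscaling of the LR input
    x = zoom(y_lr, scale, order=3)
    for _ in range(iters):
        # back-projection: degrade the estimate, compare with y_lr,
        # and project the upscaled residual back (role of (19))
        degraded = zoom(gaussian_filter(x, blur_std), 1.0 / scale, order=3)
        z = x + step * zoom(y_lr - degraded, scale, order=3)
        # denoising step: ADDF replaces the proximal operator
        x = addf_denoise(z, sigma)
    return x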

6.2. Experiments for Single Image Super-Resolution

In the experimental part of SISR, the images we deal with include two cases: LR blurry images contaminated by Gaussian noise and LR blurry ones contaminated by Laplace noise.

6.2.1. LR Blurry Images Contaminated by Gaussian Noise

In our SISR experiments, the low-resolution (LR) images are generated by first blurring a high-resolution (HR) image with a Gaussian kernel of standard deviation 0.2 and then downsampling the blurred image by a scaling factor of 2. Additive Gaussian noise is also added to the LR blurry images, which makes the restoration more challenging. We use the Matlab function "imresize" to perform the typical downsampling degradation.

In our experiment, we repeatedly implement (21) eight times for SISR and set the step size to 0.2. We use BM3D, DDID, and DDF as the competing denoisers, and term the resulting methods Plug-BM3D, Plug-DDID, and Plug-DDF. It is worth noting that, in the denoising step of our Plug-ADDF, the optimal confidence factors for Gaussian denoising from Algorithm 3 are employed.

Table 8 shows the PSNR results of four different methods for SISR on the ten test images from LIVE1 and Set5. We can see that the proposed Plug-ADDF is significantly superior to the other three competing methods in terms of PSNR. We show in Figure 14 an example to illustrate the SISR results by the four methods. One can see that the image restored by our approach Plug-ADDF is clearer and more image details are recovered. Such a promising result is attributed to the fact that the proposed Plug-ADDF skillfully takes advantage of adaptive dual-domain filtering during the iterations of SISR.

6.2.2. LR Blurry Images Contaminated by Laplace Noise

In this subsection, we deal with the recovery of LR blurry images contaminated by Laplace noise. First, we introduce the parameter settings. The experimental LR images are generated by blurring with a Gaussian kernel of standard deviation 0.3 and downsampling the blurred image by a scaling factor of 2. Additive Laplace noise is then added to the LR blurry images. In the gradient descent we set a small step size, while in the denoising step we apply ADDF with the optimal confidence factors for Laplace denoising from Algorithm 4. With these settings, we iteratively implement (21) eight times for SISR.

Secondly, we outline the experimental results of SISR. Table 9 shows the PSNR results of Plug-DDF and the proposed Plug-ADDF on the ten images from the datasets LIVE1 [15] and Set5 [17]. Figure 15 illustrates the SISR performance comparison on the image "Starfish." We can find that, for SISR with Laplace noise, the proposed Plug-ADDF outperforms Plug-DDF both objectively and subjectively.

7. Conclusions

Different from popular deep learning methods, which usually require a large number of samples and a long training time to learn a vast number of parameters, the proposed ADDF is developed by simple learning on a small number of samples and can effectively improve the performance of model-based methods. Model-based approaches build on traditional techniques such as differential-equation diffusion, TV regularization, and transform-domain filtering; they estimate the original image from the noisy image mainly by means of image priors such as local smoothness, nonlocal self-similarity, and sparsity. The core idea of these model-based approaches is that image priors rely on knowledge rather than on data learning. By learning the optimal confidence factors in the spatial and frequency domains, we present a robust estimation paradigm, ADDF, which is suitable for removing additive Gaussian and Laplace noise. One key procedure in the proposed ADDF scheme is to adaptively estimate the noise according to different noise levels. DDID and DDF are simpler than BM3D to implement and yet rival BM3D in quality; it is gratifying that our ADDF outperforms the similar state-of-the-art approaches BM3D, DDID, and DDF with almost no increase in the amount of computation. Moreover, we embed ADDF into classical image deblurring and SISR frameworks. The experimental results confirm that our proposed Plug-ADDF method produces satisfactory image recovery.

With respect to computational cost, we mainly compare ADDF with DDF. Specifically, we select ten noisy color images at a fixed noise level for denoising and report the average time. ADDF (121 s) has almost the same running time as DDF (120 s) but achieves better image recovery performance; hence, compared with DDF, the proposed method does not increase the computational cost. The experiments are run on a PC with two 2.10 GHz Intel CPUs and 64.0 GB RAM in the Matlab R2017b environment.

We also make a comprehensive comparison of the proposed ADDF with a typical CNN-based method, DnCNN [28]. DnCNN needs 400 images to learn more than 600,000 parameters; owing to such high computational complexity, it takes more than one day to train the required network, and the training generally has to be carried out on a GPU because back propagation (BP) in DnCNN is expensive. Although DnCNN has good image recovery performance, it relies heavily on data and cannot systematically interpret what it has learned. In contrast, our proposed ADDF uses only three images to learn two parameters at moderate computational cost. Specifically, it takes only 12 seconds on a CPU, using regression, to obtain the optimal confidence power and the optimum iteration number, which constitute the optimal confidence factors. Hence, ADDF is much faster than DnCNN in terms of training time. Moreover, our proposed ADDF has strong generalization ability, mainly because it is a model-based image restoration method whose model is determined by the physical properties of the image and the noise.

In terms of image restoration performance, the proposed method is inferior to large-sample learning methods such as DnCNN, which is the main shortcoming of current model-based approaches. Thus, one of our future research directions is to further enhance image restoration under the small-sample condition, for example, for medical high-dose images. On the other hand, under the large-sample condition, deep learning methods can be used to solve new models, such as improvements of the trainable nonlinear reaction diffusion [40], to further improve the image restoration effect.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors state that there are no conflicts of interest about the publication of this paper.

Acknowledgments

This paper is supported by the National Natural Science Foundation of China (Grants 61472303, 61472257, and 61271294), the Fundamental Research Funds for the Central Universities (Grant NSIY21), and the HD Video R & D Platform for Intelligent Analysis and Processing in Guangdong Engineering Technology Research Centre of Colleges and Universities under Grant GCZX-A1409.