
Zhuang Fang, Xuming Yi, Liming Tang, "An Adaptive Boosting Algorithm for Image Denoising", Mathematical Problems in Engineering, vol. 2019, Article ID 8365932, 14 pages, 2019. https://doi.org/10.1155/2019/8365932

An Adaptive Boosting Algorithm for Image Denoising

Academic Editor: Ezequiel López-Rubio
Received: 04 Jul 2018
Revised: 14 Jan 2019
Accepted: 03 Feb 2019
Published: 18 Feb 2019

Abstract

Image denoising is an important problem in many fields of image processing. The boosting algorithm has attracted extensive attention in recent years; it provides a general framework that strengthens the original noisy image, and within this framework many classical denoising algorithms can improve their denoising performance. However, the boosting step is usually fixed or nonadaptive; i.e., the noise level used in the iteration steps is set to a constant. In this work, we propose a noise level estimation algorithm that combines an overestimate and an underestimate of the noise level. Based on this estimator, we further propose an adaptive boosting algorithm that avoids intricate parameter configuration, and we prove its convergence. Experimental results demonstrate the effectiveness of the proposed adaptive boosting algorithm. In addition, compared with the classical boosting algorithm, the proposed algorithm achieves better performance in terms of visual quality and peak signal-to-noise ratio (PSNR).

1. Introduction

Image denoising is a fundamental problem in image processing, computer vision, pattern recognition, and related fields. Consider a noisy image modeled as $y = x + n$ (1), where $x$ is a clean image and $n$ is additive white Gaussian noise with zero mean and standard deviation $\sigma$. The goal of denoising is to recover the clean image $x$ in (1). To this end, many techniques and methods, such as spatial adaptive filters, transform-domain methods, sparse representation, and processing of local patches, have been explored for this problem, leading to state-of-the-art denoising algorithms (denoisers) including NLM [1], K-SVD [2], EPLL [3], BM3D [4], BM3D-SAPCA [5], and LSSC [6].
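As an illustration of model (1), the following minimal sketch synthesizes a noisy observation from a clean image array; it assumes grayscale images stored as NumPy arrays and is not part of the original algorithm.

```python
import numpy as np

def add_gaussian_noise(x, sigma, seed=0):
    """Simulate model (1): y = x + n with n ~ N(0, sigma^2) i.i.d. per pixel."""
    rng = np.random.default_rng(seed)
    n = rng.normal(0.0, sigma, size=x.shape)   # zero-mean white Gaussian noise
    return x.astype(np.float64) + n            # noisy observation y

# Example: corrupt a flat 64x64 image (gray level 128) at noise level sigma = 25.
x_clean = np.full((64, 64), 128.0)
y_noisy = add_gaussian_noise(x_clean, sigma=25.0)
```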

Although the algorithms mentioned above are effective for denoising, their performance can be further improved by employing specific techniques, such as "twicing", Bregman iteration, and SAIF. In this paper, we focus on boosting, a technique originally used in machine learning, which improves denoising performance by repeatedly reusing [7–10] weak denoisers to achieve a strong one. A general boosting algorithm [11] for image denoising can be expressed as
$\hat{x}_{k+1} = \hat{x}_k + f(y - \hat{x}_k)$, (2)
where $f$ is a denoiser and $\hat{x}_k$ is the denoised image at the $k$th iteration. Essentially, the boosting algorithm (2) repeatedly denoises the residual $y - \hat{x}_k$ and adds it back to the denoised image. However, Talebi et al. [10] pointed out that the number of iterations must be tuned carefully, since the sequence obtained by (2) does not always converge to the best restoration.

To conquer this problem, Romano et al. [7] proposed another boosting algorithm, called SOS (strengthen-operate-subtract), which can be expressed as
$\hat{x}_{k+1} = f(y + \rho \hat{x}_k) - \rho \hat{x}_k$. (3)
SOS strengthens the noisy image by adding a portion $\rho \hat{x}_k$ of the denoised image to the noisy image before applying the denoiser in the next iteration, and then subtracts the same portion from the outcome. SOS has excellent performance in denoising applications. In addition, it is guaranteed to converge to an optimal solution, which makes it easy to define a stopping criterion.
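The two update rules (2) and (3) can be sketched generically as follows; this is an illustrative sketch in which `denoise` stands for an arbitrary denoiser $f$ (a Gaussian filter is used only as a toy stand-in), not the configuration used in the experiments.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def boost_residual(y, denoise, iters=5):
    """Generic boosting (2): repeatedly denoise the residual and add it back."""
    x_hat = denoise(y)
    for _ in range(iters):
        x_hat = x_hat + denoise(y - x_hat)      # add back the denoised residual
    return x_hat

def boost_sos(y, denoise, rho=1.0, iters=5):
    """SOS boosting (3): strengthen the signal, operate the denoiser, subtract."""
    x_hat = denoise(y)
    for _ in range(iters):
        x_hat = denoise(y + rho * x_hat) - rho * x_hat
    return x_hat

# Toy usage with a Gaussian blur standing in for the denoiser f.
y = np.random.default_rng(0).normal(128.0, 25.0, size=(64, 64))
x_sos = boost_sos(y, lambda im: gaussian_filter(im, sigma=1.0), rho=1.0, iters=3)
```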

We note that the denoisers in (2) and (3) are invariant operators; in other words, the noise level parameter of the denoiser is fixed regardless of the noise levels in the different iteration steps. The authors in [7] predefine a constant to estimate the initial noise level of the noisy image and then use a scaled version of it to estimate the noise level of the strengthened image; in the following weaker denoising steps, this constant parameter is used in the denoisers. Obviously, such a scheme is not precise, since the iterates are progressively restored images, which leads to a decreasing noise level of the strengthened image as the iteration index increases. To solve this problem, in this paper we propose an adaptive boosting algorithm for image denoising. Adaptive denoisers $f_{\hat{\sigma}_k}$ are used in the boosting iterations rather than an invariant operator $f$, where the parameter $\hat{\sigma}_k$ is a noise level estimate at the $k$th iteration.

The estimation of the noise level plays a key role in our adaptive boosting algorithm. In the last decades, many methods for noise level estimation have been proposed [12–20]. Among these methods, the blind evaluation of the noise level in textured images has been widely studied [15, 16], which has led to many noise level estimation methods for highly textured images. Different from the methods reported in [15, 16], in this paper we focus on patch-based methods, which have received a lot of attention due to their sound theoretical basis and promising performance, such as PCA [17], WT [18], and others [19, 20]. Chen et al. [19] pointed out that the PCA and WT methods can lead to an underestimation of the noise level, by using Blom's theorem [21]. Jiang et al. [20] proposed a noise level estimation method (called JZ in the following) based on the eigenvalues of the covariance matrix of flat patches. In this paper, we prove that the JZ method is an overestimation of the noise level. Combining the WT underestimate and the JZ overestimate, we propose a new estimator that obtains a more accurate estimation. The main contributions of this paper are as follows:
(i) We propose an adaptive boosting algorithm for image denoising, in which an adaptive weaker denoiser is introduced rather than the invariant operator often used in traditional boosting.
(ii) We prove that the JZ method is an overestimation of the noise level by using Blom's theorem. In addition, combining the WT underestimate and the JZ overestimate, we propose a new estimator that obtains a more accurate estimation.
(iii) We prove the convergence of the proposed adaptive boosting algorithm, and we conduct experiments to validate it.

This paper is organized as follows. In Section 2, we present related works concerning noise level estimation and image quality assessment. In Section 3, a noise level estimation algorithm and an adaptive boosting algorithm for image denoising are proposed. In Section 4, the experimental procedure and the comparison between the proposed method and the original denoising algorithms are described in detail. We summarize, conclude, and discuss directions of future research in Section 5.

2. Related Works

2.1. Eigenvalues and Noise Level

Liu et al. [18] gave a noise level estimation algorithm based on weak textured patches (the WT method). According to their algorithm, the noise level of the image is estimated as
$\hat{\sigma}_w^2 = \lambda_M(\Sigma_W)$, (4)
where $\lambda_i(\Sigma_W)$ is the $i$th eigenvalue (in descending order) of $\Sigma_W$, $W$ is the set of weak textured patches selected by Algorithm 1 reported in [18], and $\Sigma_W$ is the covariance matrix of $W$. By selecting flat patches in the noisy image, Jiang et al. [20] gave another noise level estimate (5), based on $\lambda_i(\Sigma_F)$, the $i$th eigenvalue of the covariance matrix $\Sigma_F$ of the selected flat patches $F$. In (4) and (5), $M$ is the number of elements in each patch. The basis of the above two methods is to extract a set of patches whose elements follow a Gaussian distribution. Chen et al. [19] proved the following theorems about the distribution of the eigenvalues.
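The common core of both estimators, collecting patches, forming their covariance matrix, and reading the noise variance from its eigenvalues, can be sketched as follows. The patch-selection rules of [18] and [20] are omitted here, so this illustrates only the eigenvalue step of (4), computed over all overlapping patches instead of only the weak-textured ones.

```python
import numpy as np

def patch_covariance_eigs(image, patch=7):
    """Vectorize all overlapping patch x patch blocks and return the
    eigenvalues of their covariance matrix, sorted in descending order."""
    h, w = image.shape
    cols = []
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            cols.append(image[i:i + patch, j:j + patch].ravel())
    X = np.asarray(cols, dtype=np.float64)        # one patch per row
    cov = np.cov(X, rowvar=False)                 # M x M covariance, M = patch^2
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

def sigma_wt_style(image, patch=7):
    """WT-style estimate (4): the smallest eigenvalue approximates sigma^2."""
    lam = patch_covariance_eigs(image, patch)
    return float(np.sqrt(max(lam[-1], 0.0)))
```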

Lemma 1 (see [19]). Given a set of random variables in which each element independently follows a Gaussian distribution, the distribution of the noise estimate converges to the stated limiting distribution when the sample size becomes sufficiently large.

Lemma 2 (see [19]). Let $\Phi$ denote the cumulative distribution function of the standard Gaussian distribution. Given $n$ independent random variables generated from a normal distribution with mean $\mu$ and standard deviation $\sigma$, with order statistics $X_{(1)} \le \cdots \le X_{(n)}$, the expected value of $X_{(k)}$ can be approximated by Blom's formula $E[X_{(k)}] \approx \mu + \Phi^{-1}\!\big(\frac{k-\alpha}{n-2\alpha+1}\big)\sigma$ with $\alpha = 0.375$.

2.2. Natural Image Quality Evaluator (NIQE)

The denoising algorithm considered here is usually an iterative process in which the number of iterations needs to be selected such that the denoised image achieves the best visual quality. For this problem, no-reference/blind image quality assessment models [22–26] are introduced. Recently, Mittal et al. [25] proposed a blind image quality assessment model called NIQE. In this method, the quality of a given image is expressed as
$D(\nu_1, \nu_2, \Sigma_1, \Sigma_2) = \sqrt{(\nu_1 - \nu_2)^T \big(\tfrac{\Sigma_1 + \Sigma_2}{2}\big)^{-1} (\nu_1 - \nu_2)}$,
where $\nu_1$, $\nu_2$ and $\Sigma_1$, $\Sigma_2$ are the mean vectors and covariance matrices of the natural MVG (multivariate Gaussian) model and the distorted image's MVG model, respectively. Compared with other methods, NIQE does not need to be trained on large databases of human opinions of distorted images and has low computational complexity. Thus, it is well suited for determining the optimal number of iterations. More details about NIQE can be found in [25].
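The NIQE score above is a Mahalanobis-like distance between two multivariate Gaussian models. A direct implementation of that distance, given already-fitted means and covariances, is sketched below; fitting the MVG models themselves (feature extraction from natural images) is outside the scope of this sketch.

```python
import numpy as np

def niqe_distance(nu1, Sigma1, nu2, Sigma2):
    """Distance between the natural MVG model (nu1, Sigma1) and the test
    image's MVG model (nu2, Sigma2), as used by the NIQE index."""
    d = np.asarray(nu1, dtype=np.float64) - np.asarray(nu2, dtype=np.float64)
    S = 0.5 * (np.asarray(Sigma1) + np.asarray(Sigma2))
    return float(np.sqrt(d @ np.linalg.pinv(S) @ d))
```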

3. The Proposed Method

In this section, we discuss the theoretical foundation of the noise level estimation algorithm and give an estimation that linearly combines the overestimated and underestimated results to evaluate the noise levels. Then an adaptive boosting algorithm for image denoising is proposed. The details of these steps are presented in the following subsections.

3.1. Noise Level Estimation

Using Lemmas 1 and 2 we can establish the following result.

Corollary 3. The estimation of in (4) satisfies . Particularly, if , then , where

Proof. If , then . By adding to both sides, we have ; that is, . Then, we have . Thus , with and . In (4), the last eigenvalue is selected as the noise level; it follows that . From Lemma 2, the following relationship holds, from which the claimed bound follows.
Note that . Then ; that is, . Because is a monotonic function, we can obtain . Then, . Finally, we multiply both sides of the above equation by and simplify it to obtain .

Theorem 4. Under the above assumptions, the noise level can be estimated by combining the underestimate (7) and the overestimate (8) as in (9), with the combination coefficients specified in the proof.

Proof. From (7) and (8), we obtain (10). After a change of variables, (10) becomes (11). Therefore, the noise level can be solved from (11), which gives (12).
The collection of weak textured patches and the collection of flat patches may not be the same. We extract the flat patches from the weak textured patches and denote the resulting collection by $F$. Then both (7) and (8) hold on the collection $F$, and the proposed noise level estimation procedure is described in Algorithm 1. In the experiments, the maximum number of iterations is kept small to ensure the efficiency of Algorithm 1 while still obtaining an accurate noise level. The results of the corresponding verification experiments can be seen in Section 4.1. In the following section, Algorithm 1 will be plugged into a new adaptive boosting algorithm.

Algorithm 1: Noise level estimation.
Input: Noisy image $y$.
Output: Noise level $\hat{\sigma}$.
Initialize: Estimate the initial noise level $\hat{\sigma}_0$; set the maximum number of iterations.
While the stopping criterion is not met (at most the maximum number of iterations) do
 1: Update the iteration counter;
 2: Use Algorithm 1 in [18] to select the weak textured patches $W$.
 3: From $W$, use Algorithm 1 in [20] to select the flat patches $F$.
 4: For all the patches in $W$, calculate the underestimate using Eq. (7).
 5: For all the patches in $F$, calculate the overestimate using Eq. (8).
 6: Calculate the combined noise level using Eq. (9).
End while
Return: Noise level $\hat{\sigma}$.
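For illustration, the control flow of Algorithm 1 can be sketched as follows. The callables select_weak_textured, select_flat, estimate_wt, estimate_jz, and combine are placeholders for the patch-selection rules of [18, 20] and for Eqs. (7)-(9), which are not reproduced here; only the iterative structure of the listing above is shown, and the convergence check is an assumption.

```python
def estimate_noise_level(y, sigma0, select_weak_textured, select_flat,
                         estimate_wt, estimate_jz, combine,
                         max_iter=3, tol=0.1):
    """Skeleton of Algorithm 1: alternate patch selection and estimation
    until the noise-level estimate stabilizes or max_iter is reached."""
    sigma = sigma0
    for _ in range(max_iter):
        W = select_weak_textured(y, sigma)      # weak textured patches (Alg. 1 of [18])
        F = select_flat(W, sigma)               # flat patches drawn from W (Alg. 1 of [20])
        sigma_w = estimate_wt(W)                # underestimate, Eq. (7)
        sigma_f = estimate_jz(F)                # overestimate, Eq. (8)
        sigma_new = combine(sigma_w, sigma_f)   # combined estimate, Eq. (9)
        if abs(sigma_new - sigma) < tol:        # assumed convergence check
            return sigma_new
        sigma = sigma_new
    return sigma
```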
3.2. Adaptive Boosting Algorithm

Based on the analysis of the denoiser in (3), the noise level is the main parameter in the iteration. We show that the denoiser can be improved by the following boosting procedure (steps (1)–(4) below):

(1) Strengthen the signal by accumulating the previous denoised image onto the noisy input image.

(2) Estimate the noise level of the strengthened image.

(3) Project the strengthened image to the range of at most 1; i.e., divide the strengthened image by its infinity norm (maximum value).

(4) Operate the denoiser on the projected image and back-project the range of the outcome to match that of the clean image $x$.

The main equations describing the above procedure can be written compactly as iteration (13), where the infinity norm $\|\cdot\|_\infty$ represents the maximum value of all elements, $\hat{\sigma}_k$ is the estimated noise level of the strengthened image, the parameter $\nu$ controls the strength of the denoiser, and $\rho$ controls the signal emphasis. Our full image denoising algorithm is given in Algorithm 2.

Algorithm 2: Adaptive boosting denoising.
Input: Noisy image $y$, denoising operator $f$.
Output: An estimate $\hat{x}$ for $x$.
Initialize: Set $k = 0$, the initial estimate $\hat{x}_0$, and the parameters $\rho$ and $\nu$.
Main Iteration: Increment $k$ by 1 and perform the following steps:
 1. Strengthen the signal by adding $\rho$ times the previous denoised image to the noisy input.
 2. Estimate the noise level of the strengthened image by Algorithm 1.
 3. Project the strengthened image to the range of at most 1 and operate the denoiser $f$ on it.
 4. Back-project the range of the outcome to obtain $\hat{x}_k$.
Stopping Rule: If the stopping criterion is satisfied (in our experiments, the NIQE score of the output no longer decreases; see Section 4.2.1), stop. Otherwise, apply the main iteration.
Return: The estimate $\hat{x}$ obtained after the last iteration.
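A sketch of the main iteration of Algorithm 2 is given below, following steps (1)–(4) of Section 3.2. Since Eq. (13) is not reproduced above, the exact update, in particular how the $\rho$-weighted portion is removed after denoising, is an assumption here modeled on the SOS update (3); denoise, estimate_noise_level, and niqe are placeholders for the base denoiser (e.g., BM3D), Algorithm 1, and the NIQE score, respectively.

```python
import numpy as np

def adaptive_boost(y, denoise, estimate_noise_level, niqe, rho, nu, max_iter=20):
    """Illustrative sketch of Algorithm 2 with a NIQE-based stopping rule.

    denoise(image, sigma)        -> denoised image (e.g., BM3D at level sigma)
    estimate_noise_level(image)  -> noise level estimate (Algorithm 1)
    niqe(image)                  -> no-reference quality score (lower is better)
    """
    x_hat = np.zeros_like(y, dtype=np.float64)
    best, best_score = y, niqe(y)
    for _ in range(max_iter):
        z = y + rho * x_hat                        # (1) strengthen the signal
        sigma_z = estimate_noise_level(z)          # (2) adaptive noise level of z
        m = np.abs(z).max()                        # (3) project to range <= 1
        d = m * denoise(z / m, nu * sigma_z / m)   # (4) denoise and back-project
        x_hat = d - rho * x_hat                    # remove the emphasized portion (SOS-style assumption)
        score = niqe(x_hat)
        if score >= best_score:                    # stop once NIQE stops decreasing
            break
        best, best_score = x_hat, score
    return best
```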
3.3. Convergence Analysis of Algorithm 2

In this section, the notions of bounded denoiser and linear denoiser are introduced and some propositions are analyzed; then the convergence of Algorithm 2 is proved.

Definition 5 ((bounded denoiser) [27]). A bounded denoiser with a parameter $\sigma$ is a function $f_\sigma: \mathbb{R}^N \to \mathbb{R}^N$ such that for any input $x \in \mathbb{R}^N$, $\frac{1}{N}\|f_\sigma(x) - x\|_2^2 \le C\sigma^2$ for some universal constant $C$ independent of $N$ and $\sigma$.

Definition 6 (linear denoiser). The denoiser $f$ is a linear operator if, for any given constants $a$ and $b$ and inputs $x_1$ and $x_2$, $f(a x_1 + b x_2) = a f(x_1) + b f(x_2)$.

Milanfar [8] pointed out that many denoisers satisfy Definitions 5 and 6. Based on this, Romano et al. [7] proved the convergence of SOS algorithm by neglecting the nonlinear term. According to these definitions and the following properties, we prove the convergence of Algorithm 2.

Proposition 7. For the given and , .

Proof. Since is a bounded denoiser, we have .

In the iterative equation (13), $\hat{x}_k$ is the $k$th iterative approximation of $x$. Denote the error between $\hat{x}_k$ and $x$ by $e_k$; that is, $e_k = \hat{x}_k - x$. Then it follows from the iterative equation that (16) holds. Substituting the noise model (1) and the error definition into (16), we can get (17). By the central limit theorem, the resulting residual term is approximately a zero-mean normal random variable; we denote its standard deviation by $\sigma_k$. This implies that the strengthened image in (17) can be considered as a clean portion plus zero-mean Gaussian white noise with standard deviation $\sigma_k$. Thus, the noise level of the strengthened image estimated by Algorithm 1 is bounded. Then, we have the following proposition.

Proposition 8. There exists a constant satisfying .

Proposition 9. For all , and .

Proof. Since , we have , for . Then .
Assume that the bound holds for some index $k$. Then, noting the preceding relations, we can obtain the same bound for $k + 1$. Therefore, the mathematical inductive method enables us to get the first claim.
Since , we have .

Proposition 10. For any positive integer , the following inequalities hold:

Proof. For any given parameters, the corresponding quantity is bounded; therefore, for all noise levels, we have (20). Noting this and employing (20) and Proposition 9, we have (22). Using the iteration, we have (23). Substituting (22) into (23), we get the next bound. Since the remaining term is bounded, we then have the claimed inequalities. This completes the proof.

Next, we further consider the convergence of Algorithm 2.

Theorem 11. If $f$ is a bounded and linear denoiser, then the main iterations of Algorithm 2 are convergent; that is, $\|\hat{x}_{k+1} - \hat{x}_k\| \to 0$ as $k \to \infty$.

Proof. It is easy to obtain (26). Further applying Definition 5 and substituting (22) into (26), we have (27). Similarly, we have (28). Applying Definition 5 and substituting (25) into (28), we obtain (29). On the other hand, the iteration leads to (30). Substituting (27) and (29) into (30), we have (31). Finally, applying Proposition 7 and substituting (31) into (32), we obtain (33). From Proposition 8, the quantities involved are all bounded, and since they are finite, we can get $\|\hat{x}_{k+1} - \hat{x}_k\| \to 0$ as $k \to \infty$ by taking the limit on both sides of (33).

3.4. Parameters Configuration

In Algorithm 2, we select the state-of-the-art algorithms BM3D and BM3D-SAPCA as the denoiser $f$. The source code of these algorithms can be obtained from the original authors, and we use the default parameters. Once the denoiser is given, the remaining parameters in Algorithm 2 are the signal emphasis $\rho$ and the denoiser strength $\nu$. In the remainder of this section, we mainly discuss the influence of $\rho$ and $\nu$ on the final results.

Considering candidate values of $\rho$ and $\nu$ discretized with a fixed step size, we obtain a set of parameter pairs. We then introduce these parameters into the proposed Algorithm 2 and apply the algorithm to a noisy image. Figure 1(a) shows the relationship between $(\rho, \nu)$ and the PSNR, defined as $\mathrm{PSNR} = 10\log_{10}(255^2/\mathrm{MSE})$, where MSE is the mean squared error between the original image and its denoised version. The relationship between $(\rho, \nu)$ and the optimal truncation iteration number is shown in Figure 1(b). It can be seen from Figure 1 that when both parameters are small, we get the smallest PSNR and the largest iteration number; in this case, the efficiency of the denoising is the lowest. When both parameters take their largest values, the number of iterations is the minimum, but the PSNR is not the maximum. The PSNR shows some volatility with respect to $(\rho, \nu)$, whereas the iteration number in the region corresponding to the optimal PSNR values is consistent. In order to get the best denoising performance while reducing the computational complexity, suitable parameters $\rho$ and $\nu$ must be determined. According to the results presented in Figure 1, the PSNR is the highest and the number of iterations is acceptable at the parameter pair (0.635, 0.472). Therefore, we select (0.635, 0.472) in the following experiments.
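The parameter sweep described above can be reproduced schematically as follows; run_algorithm2 is a placeholder for Algorithm 2 (returning the denoised image and the number of iterations used), and the grid bounds and step size are illustrative rather than those of the paper.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """PSNR = 10 * log10(peak^2 / MSE) between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def grid_search(x_clean, y_noisy, run_algorithm2, step=0.05):
    """Sweep the two parameters of Algorithm 2 over a coarse grid and keep
    the pair giving the highest PSNR, together with its iteration count."""
    best_pair, best_psnr = None, -np.inf
    for rho in np.arange(step, 1.0 + 1e-9, step):
        for nu in np.arange(step, 1.0 + 1e-9, step):
            x_hat, n_iter = run_algorithm2(y_noisy, rho, nu)
            p = psnr(x_clean, x_hat)
            if p > best_psnr:
                best_pair, best_psnr = (rho, nu, n_iter), p
    return best_pair, best_psnr
```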

4. Results

4.1. Noise Level Estimation Results

Usually, the Bias, Std, and RMSE values are used to evaluate noise estimation performance. We use these well-known criteria, which measure accuracy, reliability, and overall performance, respectively, and which have been considered in most of the literature. In detail, for a set of estimates $\hat{\sigma}_i$, $i = 1, \ldots, n$, obtained on images corrupted with true level $\sigma$, $\mathrm{Bias} = \frac{1}{n}\sum_i \hat{\sigma}_i - \sigma$, $\mathrm{Std}$ is the standard deviation of the $\hat{\sigma}_i$, and $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_i (\hat{\sigma}_i - \sigma)^2}$.
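Under these standard definitions (assumed here, since the explicit formulas are conventional in the noise-estimation literature), the three criteria can be computed as follows.

```python
import numpy as np

def estimation_metrics(sigma_hat, sigma_true):
    """Bias, Std, and RMSE of a set of noise-level estimates sigma_hat
    obtained on images corrupted with the same true level sigma_true."""
    e = np.asarray(sigma_hat, dtype=np.float64)
    bias = float(np.mean(e) - sigma_true)                    # accuracy
    std = float(np.std(e))                                   # reliability
    rmse = float(np.sqrt(np.mean((e - sigma_true) ** 2)))    # overall performance
    return bias, std, rmse
```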

We test our method on the Tampere image dataset (TID2008) [28], which contains 25 reference images of size 512 × 384. All images in this dataset are disturbed by zero-mean Gaussian noise with standard deviations 5, 10, 15, 20, 25, 30, 40, 50, 60, 70, 80, 90, and 100. According to Figure 3(a), the estimated results are almost the same as the true noise levels. As shown in Figure 3(b), the values of Bias, Std, and RMSE are all close to zero, which illustrates the accuracy of Algorithm 1.

4.2. Denoising Results
4.2.1. Objective Measurement Results

In this section, we provide detailed results of the proposed method. Based on the completely blind image quality assessment method NIQE, we adopt the following stopping rule: the iteration terminates once the NIQE score of the output no longer decreases, and the final outcome of Algorithm 2 is the last output whose NIQE score decreased. We evaluate the competing methods on a standard image processing test dataset (SIPTD) including Foreman, Lena, House, Fingerprint, and Peppers, whose scenes are shown in Figure 2. These images have been extensively used in image processing, so it is convenient to compare our method with other methods under the same conditions. All the test images are corrupted by additive zero-mean Gaussian noise at several noise levels. The denoising performance is evaluated using the PSNR, the structural similarity (SSIM) [29], and the dissimilarity index (DSI) [30]. All results obtained by the competing denoising methods are shown in Tables 1 and 2. The results in the BM3D and BM3D-SAPCA columns are obtained by applying BM3D or BM3D-SAPCA to the noisy image using the accurate noise standard deviation. The results in the SOS and OURS columns are obtained by applying the SOS method reported in [7] and the proposed Algorithm 2, respectively. In comparison, Algorithm 2 achieves higher PSNR and SSIM than the other schemes. Compared with BM3D, the PSNR obtained by Algorithm 2 achieves about 0.45 dB improvement on the image Peppers at noise level 50. The results presented in Table 3 indicate that the proposed approach achieves the highest average improvement of PSNR and SSIM among all compared methods. Hence, for the SIPTD dataset, the proposed algorithm is effective.


Table 1: Denoising results on the SIPTD images with BM3D as the base denoiser. For each image and noise level σ, the rows give PSNR (dB), SSIM, and DSI, and the columns give the results of BM3D, SOS, and the proposed algorithm (OURS).

σ            Foreman                      Lena                         House
             BM3D     SOS      OURS       BM3D     SOS      OURS       BM3D     SOS      OURS
10   PSNR    37.24    37.24    37.24      35.87    35.87    35.87      36.65    36.65    36.65
     SSIM    0.9375   0.9368   0.9368     0.9149   0.9141   0.9154     0.9187   0.9163   0.9220
     DSI     2.1859   2.3486   1.7016     3.1644   3.3521   2.8190     2.7045   2.8983   2.0538
20   PSNR    34.43    34.43    34.50      32.98    32.99    33.01      33.67    33.69    33.67
     SSIM    0.9061   0.9085   0.9067     0.8764   0.8751   0.8771     0.8684   0.8657   0.8663
     DSI     3.3650   3.7503   3.6073     4.7598   5.2280   5.0152     4.4826   4.9408   5.0619
25   PSNR    33.44    33.46    33.60      32.02    32.04    32.07      32.79    32.80    32.85
     SSIM    0.8922   0.8978   0.8983     0.8599   0.8591   0.8619     0.8566   0.8555   0.8558
     DSI     3.6520   4.2599   4.3283     5.4374   6.1221   6.2237     4.9677   5.5707   5.9420
50   PSNR    30.04    30.13    30.18      28.78    28.81    29.04      29.38    29.42    29.79
     SSIM    0.8310   0.8453   0.8454     0.7860   0.7926   0.8014     0.7998   0.8057   0.8159
     DSI     4.7899   5.3656   6.6166     10.191   10.757   11.867     7.7156   8.3724   9.3492

σ            Fingerprint                  Peppers                      Average
             BM3D     SOS      OURS       BM3D     SOS      OURS       BM3D     SOS      OURS
10   PSNR    32.46    32.46    32.51      34.58    34.64    34.59      35.36    35.37    35.37
     SSIM    0.9690   0.9690   0.9682     0.9274   0.9267   0.9264     0.9336   0.9326   0.9338
     DSI     0.7034   0.7205   0.4263     2.5305   2.6643   2.0317     2.2577   2.3968   1.8065
20   PSNR    28.81    28.81    28.82      31.23    31.27    31.27      32.22    32.24    32.25
     SSIM    0.9305   0.9281   0.9308     0.8849   0.8855   0.8859     0.8933   0.8926   0.8934
     DSI     6.2994   6.4851   5.3902     5.7494   6.1046   4.8925     4.9312   5.3018   4.7934
25   PSNR    27.70    27.70    27.69      30.15    30.16    30.19      31.22    31.23    31.28
     SSIM    0.9121   0.9072   0.9122     0.8667   0.8678   0.8681     0.8775   0.8775   0.8793
     DSI     10.310   10.657   10.260     7.6232   8.1311   7.0077     6.3980   6.9482   6.7523
50   PSNR    24.34    24.36    24.53      26.32    26.38    26.83      27.77    27.82    28.07
     SSIM    0.8253   0.8143   0.8367     0.7700   0.7797   0.7952     0.8024   0.8075   0.8189
     DSI     35.098   34.955   27.840     18.296   18.521   18.802     15.218   15.594   14.895


Table 2: Denoising results on the SIPTD images with BM3D-SAPCA as the base denoiser. For each image and noise level σ, the rows give PSNR (dB), SSIM, and DSI, and the columns give the results of BM3D-SAPCA (SAPCA), SOS, and the proposed algorithm (OURS).

σ            Foreman                      Lena                         House
             SAPCA    SOS      OURS       SAPCA    SOS      OURS       SAPCA    SOS      OURS
10   PSNR    37.52    37.54    37.54      36.02    36.05    36.02      37.01    36.95    37.01
     SSIM    0.9396   0.9397   0.9398     0.9168   0.9168   0.9166     0.9274   0.9268   0.9279
     DSI     1.8436   2.0052   1.5171     3.0206   3.2788   3.2277     2.2578   2.4728   2.2514
20   PSNR    34.62    34.62    34.68      33.19    33.23    33.19      33.90    33.91    33.92
     SSIM    0.9054   0.9060   0.9055     0.8796   0.8804   0.8808     0.8727   0.8719   0.8735
     DSI     3.4477   3.8614   3.8084     4.9624   5.5129   5.5065     4.6415   5.0781   5.0674
25   PSNR    33.69    33.77    33.81      32.22    32.22    32.23      32.96    32.92    33.02
     SSIM    0.8917   0.8935   0.8935     0.8644   0.8656   0.8659     0.8588   0.8581   0.8585
     DSI     3.8153   4.3223   4.5121     5.9266   6.5503   6.8346     5.4209   6.0023   6.1307
50   PSNR    30.32    30.50    30.71      29.05    29.09    29.26      29.53    29.63    29.88
     SSIM    0.8402   0.8430   0.8497     0.8022   0.8053   0.8080     0.8045   0.8095   0.8162
     DSI     5.2345   5.7885   7.3369     9.6687   10.7764  13.6531    7.5772   7.9921   9.72155

σ            Fingerprint                  Peppers                      Average
             SAPCA    SOS      OURS       SAPCA    SOS      OURS       SAPCA    SOS      OURS
10   PSNR    32.64    32.66    32.69      34.94    34.95    34.96      35.63    35.63    35.64
     SSIM    0.9703   0.9704   0.9706     0.9284   0.9284   0.9280     0.9365   0.9364   0.9366
     DSI     0.7503   0.7795   0.8502     2.2488   2.4581   3.2900     2.0242   2.1989   2.2273
20   PSNR    28.94    28.96    29.00      31.55    31.56    31.59      32.44    32.46    32.48
     SSIM    0.9328   0.9329   0.9329     0.8868   0.8875   0.8884     0.8955   0.8957   0.8962
     DSI     6.8078   7.1170   7.4005     5.3362   5.9814   5.9419     5.0391   5.5102   5.5449
25   PSNR    27.81    27.83    27.86      30.43    30.44    30.49      31.42    31.44    31.48
     SSIM    0.9145   0.9146   0.9135     0.8692   0.8699   0.8706     0.8797   0.8803   0.8804
     DSI     11.629   12.163   13.8800    7.2262   7.8891   8.0591     6.8035   7.3855   7.8833
50   PSNR    24.53    24.55    24.48      27.00    27.05    27.06      28.09    28.16    28.28
     SSIM    0.8354   0.8360   0.8286     0.7945   0.7982   0.8005     0.8154   0.8184   0.8206
     DSI     36.143   39.1979  52.0200    16.5315  17.2333  20.4961    15.031   16.198   20.646


Table 3: Average change in PSNR (dB), SSIM, and DSI relative to the base denoiser (BM3D or BM3D-SAPCA) on the SIPTD dataset for SOS and the proposed algorithm (Ours).

                     BM3D                                BM3D-SAPCA
σ                    10       20       25       50       10       20       25       50
PSNR    SOS          0.01     0.01     0.01     0.05     0.01     0.02     0.02     0.07
        Ours         0.01     0.03     0.06     0.30     0.27     0.22     0.01     0.04
SSIM    SOS          0.0005   0.0002   0.0009   0.0051   0.0001   0.0003   0.0006   0.0031
        Ours         0.0007   0.0003   0.0018   0.0165   0.0202   0.0193   0.0001   0.0008
DSI     SOS          0.1390   0.3705   0.5502   0.3759   0.1747   0.4710   0.5819   1.1667
        Ours         -0.4513  -0.1379  0.3543   -0.3234  0.2031   0.5058   1.0797   5.6146

Moreover, we apply Algorithm 2 to all the grass images from the MeasTex texture dataset [31], which contain rich texture features. The comparison results are presented in Table 4. Compared with the SOS method, the average improvements in PSNR and SSIM demonstrate a clear advantage for almost all noise levels. It is clear that the proposed algorithm offers better restoration of texture images.


Table 4: Average change in PSNR (dB), SSIM, and DSI relative to the base denoiser (BM3D or BM3D-SAPCA) on the MeasTex grass images for SOS and the proposed algorithm (Ours).

                     BM3D                                BM3D-SAPCA
σ                    10       20       25       50       10       20       25       50
PSNR    SOS          0.01     0.01     0.01     0.03     0.01     0.02     0.02     0.07
        Ours         0.01     0.02     0.05     0.24     0.01     0.04     0.06     0.19
SSIM    SOS          0.0005   0.0002   0.0009   0.0051   0.0001   0.0003   0.0006   0.0031
        Ours         0.0007   0.0003   0.0018   0.0165   0.0001   0.0008   0.0007   0.0052
DSI     SOS          0.1390   0.3705   0.5502   0.3759   0.0393   0.3872   0.6542   1.2431
        Ours         -0.4513  -0.1379  0.3543   -0.3234  -0.1778  -0.4219  -0.1181  4.9756

Finally, we use the DSI metric (the MATLAB implementation of DSI is available at http://ponomarenko.info/flt.htm), which achieves the largest Spearman rank-order correlation with mean opinion scores, to evaluate the visual quality of the denoised images. According to the results presented in Tables 3 and 4, the proposed Algorithm 2 obtains a lower average DSI value for each denoiser on the considered datasets in most cases.

4.2.2. Detail Contrast

In this section, we compare the performance of the considered denoising algorithms in preserving image details. Figure 5 shows fragments of the noisy Baboon image and the corresponding images denoised by the different denoisers. The first row is the original image and its noisy version. The second row shows the denoised results obtained with the denoiser BM3D under different settings: the exact noise level, SOS, and the proposed Algorithm 2. The third row shows the corresponding results obtained with the denoiser BM3D-SAPCA. The enlarged fragments in each subfigure help to demonstrate the faithful detail preservation of the denoised images. Figures 5(f) and 5(i) show more details that are close to the original image.

Figure 6 shows fragments of the noisy Fingerprint image and the corresponding images denoised by the different denoisers. According to Figure 6, our approach outperforms BM3D, SOS, and BM3D-SAPCA, both numerically and visually.

Fragments of the noisy and denoised Foreman image are shown in Figure 7. For this relatively high noise level, the proposed Algorithm 2 attains good preservation of sharp details, such as the lines on the wall in the Foreman image. Meanwhile, smooth regions, such as the hat in the Foreman image, are also well preserved. The denoised images obtained by the proposed Algorithm 2 have the fewest disturbing artifacts.

4.2.3. Computational Time

We discuss the efficiency of the proposed algorithm in this section. SOS and the proposed Algorithm 2 need several iterations during denoising, so these two algorithms may cost more time than the initial denoiser. Table 5 compares the computational time of SOS and Algorithm 2. Compared with the SOS algorithm, Algorithm 2 has a certain advantage in processing time at higher noise levels when BM3D is selected as the denoiser. According to the results presented in Table 5, the processing time of Algorithm 2 is lower than that of SOS for each noise level on the SIPTD dataset when the denoiser BM3D-SAPCA is used. Furthermore, the average processing time per image of Algorithm 2 is less than that of SOS.


Table 5: Computational time of SOS and the proposed Algorithm 2 (OURS) for each noise level σ and on average (Ave.).

                       SIPTD dataset                          MeasTex-grass dataset
σ                      10      20      25      50      Ave.   10      20      25      50      Ave.
BM3D        SOS        8.47    8.95    9.05    14.66   11.43  15.53   17.22   17.87   29.56   22.13
            OURS       9.92    10.73   11.11   10.89   10.10  15.92   19.62   20.82   22.68   19.16
BM3D-SAPCA  SOS        181.3   175.3   169.9   253.5   202.4  234.8   310.3   285.4   456.3   363.2
            OURS       118.0   112.7   110.8   133.0   114.3  268.5   242.9   228.8   290.6   248.6

5. Conclusion and Discussion

In this paper, we proposed a new adaptive boosting denoising algorithm that plugs in an accurate noise level estimator. Experiments show that the algorithm can improve the performance of denoising algorithms that depend on the noise level and can also preserve image edges and detail information well. Although it is a convenient tool for improving various denoising algorithms, there are still several directions that we are interested in for future work. Firstly, in order to get the optimal output image, we use the NIQE index to assess the quality of the output images; when the NIQE scores are no longer decreasing, the final result of Algorithm 2 is returned. Figure 4(a) shows the result: the highest PSNR is obtained when the NIQE score first increases. By using Theorem 11, a straightforward stopping criterion (the difference between successive iterates is sufficiently small) can also be obtained. Although this straightforward stopping criterion prevents the output of Algorithm 2 from drifting back toward the noisy input, the PSNR does not always increase as the difference decreases. Figure 4(b) shows the relationship between the PSNR and this difference; the smallest difference does not give the optimal PSNR. Comparing Figures 4(a) and 4(b), we find that the stopping rule in Algorithm 2 is helpful for determining the optimal number of iterations. Secondly, there are two main parameters, $\rho$ and $\nu$, in the boosting method; how to select these parameters for optimal denoising results needs further research. Finally, we hope that other image restoration problems, such as image deblurring and inpainting, can use a similar embedding method.

Data Availability

It should be noted that a software release of the proposed algorithm in our manuscript is available online: https://ww2.mathworks.cn/matlabcentral/fileexchange/67924-abd.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant Nos. 11671307, 61561019, 61763009, and 11761030 and by Doctoral Scientific Fund Project of Hubei University for Nationalities under Grant No. MY2015B001.

References

  1. A. Buades, B. Coll, and J. M. Morel, “A review of image denoising algorithms, with a new one,” SIAM Journal on Multiscale Modeling and Simulation, vol. 4, no. 2, pp. 490–530, 2005. View at: Publisher Site | Google Scholar
  2. M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3736–3745, 2006. View at: Publisher Site | Google Scholar
  3. D. Zoran and Y. Weiss, “From learning models of natural image patches to whole image restoration,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), vol. 6669, pp. 479–486, November 2011. View at: Publisher Site | Google Scholar
  4. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007. View at: Publisher Site | Google Scholar
  5. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “BM3D image denoising with shape-adaptive principal component analysis,” in Proc. Workshop on Signal Processing with Adaptive Sparse Structured Representations (SPARS), Saint-Malo, France, 2009. View at: Google Scholar
  6. J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman, “Non-local sparse models for image restoration,” in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 2272–2279, October 2009. View at: Publisher Site | Google Scholar
  7. Y. Romano and M. Elad, “Boosting of image denoising algorithms,” SIAM Journal on Imaging Sciences, vol. 8, no. 2, pp. 1187–1219, 2015. View at: Publisher Site | Google Scholar | MathSciNet
  8. P. Milanfar, “A tour of modern image filtering,” IEEE Signal Processing Magazine, vol. 30, no. 1, pp. 106–123, 2013. View at: Publisher Site | Google Scholar
  9. Y. Romano and M. Elad, “Improving K-SVD denoising by post-processing its method-noise,” in Proceedings of the 2013 20th IEEE International Conference on Image Processing, ICIP 2013, pp. 435–439, 2014. View at: Google Scholar
  10. H. Talebi, X. Zhu, and P. Milanfar, “How to SAIF-ly boost denoising performance,” IEEE Transactions on Image Processing, vol. 22, no. 4, pp. 1470–1485, 2013. View at: Publisher Site | Google Scholar | MathSciNet
  11. M. R. Charest, M. Elad, and P. Milanfar, “A general iterative regularization framework for image denoising,” in Proceedings of the 2006 40th Annual Conference on Information Sciences and Systems, CISS 2006, pp. 452–457, March 2006. View at: Google Scholar
  12. N. P. Galatsanos and A. K. Katsaggelos, “Methods for choosing the regularization parameter and estimating the noise variance in image restoration and their relation,” IEEE Transactions on Image Processing A Publication of the IEEE Signal Processing Society, vol. 1, no. 3, pp. 322–336, 1992. View at: Publisher Site | Google Scholar
  13. D. L. Donoho, “De-noising by soft-thresholding,” IEEE Transactions on Information Theory, vol. 41, no. 3, pp. 613–627, 1995. View at: Publisher Site | Google Scholar | MathSciNet
  14. S. D. Chitte, S. Dasgupta, and Z. Ding, “Distance estimation from received signal strength under log-normal shadowing: Bias and variance,” IEEE Signal Processing Letters, vol. 16, no. 3, pp. 216–218, 2009. View at: Publisher Site | Google Scholar
  15. N. Ponomarenko, V. Lukin, S. Abramov, K. Egiazarian, and J. Astola, “Blind evaluation of additive noise variance in textured images by nonlinear processing of block DCT coefficients,” in Proceedings of the SPIE - The International Society for Optical Engineering, pp. 178–189, January 2003. View at: Google Scholar
  16. V. Lukin, S. Abramov, B. Vozel, and K. Chehdi, “A method for blind automatic evaluation of noise variance in images based on bootstrap and myriad operations,” Remote Sensing. International Society for Optics and Photonics, vol. 5982, pp. 299–310, 2005. View at: Google Scholar
  17. S. Pyatykh, J. Hesser, and L. Zheng, “Image noise level estimation by principal component analysis,” IEEE Transactions on Image Processing, vol. 22, no. 2, pp. 687–699, 2013. View at: Publisher Site | Google Scholar | MathSciNet
  18. X. Liu, M. Tanaka, and M. Okutomi, “Single-image noise level estimation for blind denoising,” IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 5226–5237, 2013. View at: Publisher Site | Google Scholar
  19. F. Chen, G. Zhu, and P. A. Heng, “An efficient statistical method for image noise level estimation,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 477–485, December 2015. View at: Google Scholar
  20. P. Jiang and J.-Z. Zhang, “Fast and reliable noise level estimation based on local statistic,” Pattern Recognition Letters, vol. 78, pp. 8–13, 2016. View at: Publisher Site | Google Scholar
  21. G. Blom, Statistical Estimates and Transformed Beta-Variables, John Wiley & Sons, New York, NY, USA, 1958. View at: Google Scholar
  22. C. M. Stein, “Estimation of the mean of a multivariate normal distribution,” The Annals of Statistics, vol. 9, no. 6, pp. 1135–1151, 1981. View at: Publisher Site | Google Scholar | MathSciNet
  23. X. Zhu and P. Milanfar, “Automatic parameter selection for denoising algorithms using a no-reference measure of image content,” IEEE Transactions on Image Processing A Publication of the IEEE Signal Processing Society, vol. 19, no. 12, pp. 3116–3132, 2010. View at: Publisher Site | Google Scholar | MathSciNet
  24. A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-reference image quality assessment in the spatial domain,” IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695–4708, 2012. View at: Publisher Site | Google Scholar | MathSciNet
  25. A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a completely blind image quality analyzer,” IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209–212, 2013. View at: Google Scholar
  26. Y. Li, S. Wang, C. Li, Z. Pan, and W. Zhang, “A fast color image segmentation approach using gdf with improved region-level ncut,” Mathematical Problems in Engineering, vol. 2018, no. 3, Article ID 8508294, 14 pages, 2018. View at: Publisher Site | Google Scholar
  27. S. H. Chan, X. Wang, and O. A. Elgendy, “Plug-and-play ADMM for image restoration: fixed-point convergence and applications,” IEEE Transactions on Computational Imaging, vol. 3, no. 1, pp. 84–98, 2017. View at: Publisher Site | Google Scholar | MathSciNet
  28. N. Ponomarenko, V. Lukin, A. Zelensky, K. Egiazarian, M. Carli, and F. Battisti, “Tid2008-a database for evaluation of full-reference visual quality assessment metrics,” Advances of Modern Radioelectronics, vol. 10, no. 4, pp. 30–45, 2009. View at: Google Scholar
  29. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004. View at: Publisher Site | Google Scholar
  30. K. Egiazarian, M. Ponomarenko, V. Lukin, and O. Ieremeiem, “Statistical evaluation of visual quality metrics for image denoising,” 2017. View at: Google Scholar
  31. G. Smith and I. Burns, “Measuring texture classification algorithms,” Pattern Recognition Letters, vol. 18, no. 14, pp. 1495–1501, 1997. View at: Publisher Site | Google Scholar

Copyright © 2019 Zhuang Fang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

