Mathematical Problems in Engineering


Research Article | Open Access

Volume 2019 |Article ID 1262171 | https://doi.org/10.1155/2019/1262171

Jiucheng Xu, Nan Wang, Zhanwei Xu, Keqiang Xu, "Weighted Norm Sparse Error Constraint Based ADMM for Image Denoising", Mathematical Problems in Engineering, vol. 2019, Article ID 1262171, 15 pages, 2019. https://doi.org/10.1155/2019/1262171

Weighted Norm Sparse Error Constraint Based ADMM for Image Denoising

Academic Editor: Rafal Zdunek
Received: 20 Dec 2018
Revised: 06 Apr 2019
Accepted: 11 Apr 2019
Published: 09 May 2019

Abstract

In the process of image denoising, accurate prior knowledge cannot be learned due to the influence of noise, so it is difficult to obtain good sparse coefficients. Based on this consideration, a weighted ℓp norm sparse error constraint (WPNSEC) model is proposed. Firstly, the suitable setting of the power p in the ℓp norm is analyzed in detail. Secondly, the proposed model is extended to color image denoising. Since the noise in the RGB channels has different intensities, a weight matrix is introduced to measure the noise levels of the different channels, and a multichannel weighted ℓp norm sparse error constraint algorithm is proposed. Thirdly, to ensure that the proposed algorithm is tractable, the multichannel WPNSEC model is converted into an equality-constrained problem solved via the alternating direction method of multipliers (ADMM). Experimental results on gray image and color image datasets show that the proposed algorithms not only achieve higher peak signal-to-noise ratio (PSNR) and feature similarity index (FSIM) values but also produce better visual quality than competing image denoising algorithms.

1. Introduction

In computer vision and image processing, one of the most fundamental problems is the influence of noise. To overcome this problem, image denoising has attracted more and more attention. Image denoising is designed for image quality enhancement, aiming to remove the noise N from the noisy observation Y and recover the clean image X. N is often assumed to be AWGN (additive white Gaussian noise) with a standard deviation σ. Recently, a number of image denoising methods have been reported, including sparse representation [1–3], nonlocal self-similarity [4–7], dictionary learning [1, 8], and deep learning [9, 10].
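The AWGN degradation model Y = X + N can be written in a few lines of NumPy (the paper's experiments use Matlab, so this Python fragment is only an illustration, and the function name is ours):

```python
import numpy as np

def add_awgn(x, sigma, seed=0):
    """Degrade a clean image x with additive white Gaussian noise
    of standard deviation sigma (pixel range assumed 0-255)."""
    rng = np.random.default_rng(seed)
    return x + rng.normal(0.0, sigma, size=x.shape)

# Y = X + N: the denoiser's task is to recover X from the noisy Y.
```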

The purpose of the sparse representation methods is to express most of the original signal with a few fundamental signals. The earlier models are primarily pixel-based, such as the TV (total variation) method [11], and Paul et al. [12] proposed an efficient minimization method for a generalized total variation functional. Since such methods damage the detailed features of images, patch-based sparse representation methods have been proposed. Transforming the image patches into a sparse linear combination over a dictionary is one of the most representative approaches [13–16]. Liu et al. [17] proposed an augmented Lagrangian approach to general dictionary learning. However, the patch-based sparse representation model often assumes that the image patches are independent, ignoring the correlation among nonlocal similar patches; in addition, the time complexity of dictionary learning is high. In recent years, a large number of nonlocal self-similarity works have been proposed [18–20]. Gu et al. [21] proposed weighted nuclear norm minimization and applied it to low-level vision, with favorable reconstruction performance. Since ℓ0 norm minimization is NP-hard, existing sparse representation methods for image denoising often replace it with convex ℓ1 norm optimization [22], and Zhao et al. [23] proposed an ℓ1 norm based low-rank matrix factorization. But it is difficult to obtain accurate sparse results by convex ℓ1 optimization in some inverse problems. Therefore, Xie et al. [24] proposed weighted Schatten p-norm minimization, Zha et al. [25] proposed a nonconvex ℓp norm minimization based similar-group sparse representation model, and Wang et al. [26] proposed a nonconvex weighted ℓp norm minimization based group sparse representation framework. Thus, it is not trivial to utilize the nonlocal similarity property of images. In addition, learning a dictionary from noisy images ignores the impact of noise and reduces the accuracy of the sparse coefficients. Moreover, the power p in the ℓp norm is usually set manually, which may not fully reflect the effectiveness of the algorithm.

At present, most existing image denoising algorithms are devised for gray images. Since color information plays an important role in image understanding and object recognition, color image denoising is crucial. A color image consists of RGB channels with strong correlation. In [27, 28], gray image denoising algorithms were applied to the three channels of the color image separately, but this approach fails to consider the correlation among the channels; it not only increases the time consumption but also generates false colors and artifacts. In [29, 30], the image patches of the RGB channels were stitched into one vector and treated equally, which overlooks the distinct noise statistics of the channels. Nam et al. [31] proposed a cross-channel real image denoising algorithm, improving the performance of color image denoising.

In view of the above problems, a weighted ℓp norm sparse error constraint (WPNSEC) model is proposed, which constrains the sparse error based on the nonconvex weighted ℓp norm. The proposed model reduces the influence of noise on dictionary learning; at the same time, the sparse error is decreased and the accuracy of the sparse coefficients is improved. Then, the WPNSEC model is extended to color image denoising and a cross-channel color image denoising algorithm is proposed. The proposed algorithm makes use of the nonlocal self-similarity of images and decreases the time consumption. Since the noise in the RGB channels has different intensities, a weight matrix is introduced to measure the noise levels of the different channels.

The rest of this paper is structured as follows. In Section 2, the cross-channel similar patch group and the nonconvex weighted ℓp norm model are introduced. In Section 3, first, the suitable setting of the power p is discussed in detail; second, a sparse error constraint is added to the weighted ℓp norm and the WPNSEC model is constructed; third, the multichannel WPNSEC algorithm is proposed and the model is solved by using the ADMM algorithm. In Section 4, the experimental results of gray image denoising and color image denoising are presented. Section 5 concludes this paper.

2.1. Cross-Channel Similar Patch Group

The patch group with nonlocal self-similarity has achieved great success in image denoising [21]. The purpose of image denoising is to reconstruct a clean image X = Y − N from the observed degraded noisy image Y, where N represents the noise. The noisy image Y is partitioned into n overlapping image patches of size l × l. If Y is a color image, a cross-channel color image patch of size 3l² concatenates the co-located patches in the R, G, and B channels, whose noise standard deviations are σr, σg, and σb, respectively. For each local image patch, the m most similar patches are extracted using the Euclidean distance within a search window around it. All similar patches are stacked into a matrix column by column; this matrix of similar noisy patches is denoted as the noisy similar patch group Y_i, whose m-th column is the m-th similar patch of the i-th similar patch group in channel c. The SVD of Y_i is Y_i = UΣVᵀ, where the nonzero elements of Σ are the singular values. The dictionary D is constructed from the singular vectors, and K represents the number of dictionary atoms.
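The patch extraction and Euclidean-distance block matching described above can be sketched as follows. This is a simplified NumPy illustration (all function names are ours): it searches the whole patch set rather than restricting the search to a local window around the reference patch, as a real implementation would.

```python
import numpy as np

def extract_patches(img, l, step):
    """Collect overlapping l-by-l patches of img as columns of a matrix."""
    H, W = img.shape
    patches, coords = [], []
    for i in range(0, H - l + 1, step):
        for j in range(0, W - l + 1, step):
            patches.append(img[i:i+l, j:j+l].ravel())
            coords.append((i, j))
    return np.stack(patches, axis=1), coords

def group_similar(patches, ref_idx, m):
    """Stack the m patches closest (in Euclidean distance) to the
    reference patch column by column into a similar patch group."""
    ref = patches[:, ref_idx:ref_idx + 1]
    d = np.sum((patches - ref) ** 2, axis=0)
    idx = np.argsort(d)[:m]          # the reference patch itself is included
    return patches[:, idx], idx
```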

2.2. Weighted ℓp Norm Minimization

The ℓp norm minimization problem aims to estimate the true sparse result under certain constraint conditions. Traditional patch-based sparse coding is usually solved with the ℓ1 norm or the weighted ℓ1 norm. However, for some image inverse problems, such as image deblurring, image denoising, and other image restoration problems, convex ℓ1 regularization does not yield accurate sparse results, because the sparsity of an image cannot be measured by any fixed benchmark. In [26], in order to improve the accuracy of the sparse representation, the convex ℓ1 norm was substituted by the nonconvex ℓp norm. For image denoising, the weighted ℓp penalty function is extended to the sparse representation based patch group, and the weighted ℓp norm minimization (WPNM) can be represented as

  α̂_i = arg min_{α_i} ‖Y_i − Dα_i‖_F² + ‖w_i ⊙ α_i‖_p^p,  (1)

where Dα_i denotes the clean patch group X_i, α_i is the sparse coefficient of the i-th patch group, ‖·‖_p^p represents the nonconvex ℓp norm, ⊙ denotes the element-wise (dot) product between vectors, and w_i is the weight with updating formula w_i = c·2√2·σ²/(σ̂_i + ε), where c is a constant, ε is a small positive number, and σ̂_i² is the estimated variance of α_i.
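As an illustrative sketch (in Python/NumPy rather than the paper's Matlab), the weight update described above can be written as follows; the constant c, the ε, and the way σ̂_i is estimated from the noisy coefficients are our assumptions, following the common WNNM-style rule:

```python
import numpy as np

def wpnm_weights(alpha_hat, sigma_n, c=1.0, eps=1e-8):
    """Weight update for the weighted l_p penalty: coefficients with
    larger estimated standard deviation get smaller weights, so strong
    (signal-dominated) coefficients are shrunk less.
    sigma_i is estimated from the noisy coefficients themselves."""
    sigma_i = np.sqrt(np.maximum(alpha_hat ** 2 - sigma_n ** 2, 0.0))
    return c * 2.0 * np.sqrt(2.0) * sigma_n ** 2 / (sigma_i + eps)
```

A large coefficient thus receives a small penalty weight, which is exactly the adaptivity that a flat ℓ1 penalty lacks.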

3. The Proposed Image Denoising Algorithm

3.1. The Setting of the Power p

A classical image denoising model often includes a fidelity term and a regularization term based on image prior knowledge. Some recent works have shown that denoising algorithms using nonconvex sparse coding based image priors are very efficient. However, in the low-rank matrix approximation problem, the singular values obtained by weighted Schatten p-norm minimization are overshrunk, which reduces the accuracy of the sparse coefficient. The weighted Schatten p-norm minimization problem can be represented as

  min_X ‖Y − X‖_F² + Σ_j w_j σ_j(X)^p,  (2)

where the SVD of the matrix X is X = UΣVᵀ and σ_j(X) denotes the j-th singular value of X. In this paper, we use the GST algorithm to solve the nonconvex sparse coding problem. In order to obtain good denoising effects, the GST algorithm needs to converge to a good minimum. The soft-thresholding operator is

  S_λ(y) = sign(y) · max(|y| − λ, 0).  (3)

The nonconvex weighted ℓp norm minimization problem is

  min_α ‖y − α‖_2² + ‖w ⊙ α‖_p^p.  (4)

Because this problem is separable across elements, (4) can be rewritten as the scalar problems

  min_{α_j} (α_j − y_j)² + w_j |α_j|^p,  (5)

where y and α are the vectorized forms of the matrices Y and X, and each scalar problem is solved by the generalized soft-thresholding operator of the GST algorithm.

It can be proven that the nonconvex weighted ℓp norm minimization is equivalent to the weighted Schatten p-norm minimization problem. In [24], as the power p gets larger, the singular values of the clean patch-group matrix differ more from the singular values of the matrix solved by the WSNM algorithm. Therefore, more high-rank components of the singular value matrix solved by the WSNM algorithm become 0 as the power p increases, while the low-rank components get closer to the singular values of the clean similar patches. In other words, the overshrinking problem also exists in nonconvex weighted ℓp norm minimization. However, in [24] the value of the power p is assigned manually. In order to achieve more robust denoising performance, a suitable value of the power p is set through experiments here, which reduces the uncertainty of manual parameter setting. Twenty images were selected randomly from the Berkeley Segmentation Dataset [32]. For different noise levels, the image denoising performance (PSNR) is tested by running the WPNSEC algorithm with different values of the power p, which varies from 0.05 to 1 in steps of 0.05. As shown in Figure 1, the six noise levels are 20, 30, 40, 50, 60, and 80. For each noise level, the optimal value of the power p is directly applied in the experimental section.
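The GST solver referred to above (the generalized soft-thresholding of [34]) can be sketched for the scalar problem min_x (x − y)²/2 + λ|x|^p as follows; the iteration count J = 3 is an arbitrary choice of ours, and the function name is not from the paper:

```python
import numpy as np

def gst(y, lam, p, J=3):
    """Generalized soft-thresholding for
    min_x 0.5*(x - y)**2 + lam*|x|**p with 0 < p < 1.
    Below the threshold tau the minimizer is exactly 0; above it,
    a few fixed-point iterations refine the nonzero solution."""
    tau = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p)) \
          + lam * p * (2.0 * lam * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    if abs(y) <= tau:
        return 0.0                    # global minimum is x = 0
    x = abs(y)
    for _ in range(J):                # fixed-point refinement
        x = abs(y) - lam * p * x ** (p - 1.0)
    return np.sign(y) * x
```

Note how the operator behaves like soft-thresholding for small inputs (zeroing them) but shrinks large inputs much less, which is the benefit of the nonconvex ℓp penalty.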

In Figure 1, the ordinate denotes the average PSNR over the 20 images, and the abscissa is the value of p. The best value of p is higher when dealing with lower noise levels. Figure 1 shows that when the noise levels are 20 and 30, the optimal values of the power p are 0.9 and 0.8, respectively; when the noise levels are 40 and 50, the optimal value of the power p is 0.75. As the noise level increases, more rank components of the matrix are contaminated by noise, and the higher the rank, the greater the effect. Therefore, when the noise level is high, the best value of the power p is small: when the noise levels are 60 and 80, the optimal values are 0.5 and 0.2, respectively. In short, the best value of the power p decreases as the noise level increases.

3.2. Weighted ℓp Norm Sparse Error Constraint Model

From Section 2.2, we can see that the sparse coefficients are obtained from the noisy image directly. Since the detail information of the noisy image has been destroyed by noise, the sparse coefficient of the denoised patch group is not accurate. Assume that the sparse coefficient of the clean image patch group is β. There is an error between the noisy sparse coefficient α and the clean sparse coefficient β. Thus, it is extremely important to reduce this sparse error and raise the accuracy of α: the noisy sparse coefficient α should be as close as possible to β. A sparse error constraint is therefore added to the weighted ℓp norm, giving the weighted ℓp norm sparse error constraint (WPNSEC) model. However, the clean image X is unknown, so the noisy image Y is first preprocessed with the BM3D algorithm to obtain a good estimate β of the sparse coefficient. Since the distribution of the sparse error is close to Laplacian, the sparse error is measured with the ℓ1 norm. The WPNSEC model is represented as

  min_α ‖Y − Dα‖_F² + ‖w ⊙ α‖_p^p + λ‖α − β‖_1,  (8)

where λ denotes the regularization parameter. To present the following formulas clearly, we omit the patch-group index i.

Since ‖w ⊙ α‖_p^p is a nonconvex norm and ‖α − β‖_1 is a convex norm, the two terms cannot be minimized simultaneously in closed form. Therefore, (8) is divided into two subproblems: the sparse coefficient α₁ of the weighted ℓp norm minimization is solved by the GST algorithm first; then, the sparse coefficient α₂ of the sparse error constraint is solved by a surrogate algorithm. Finally, the average of α₁ and α₂ is taken as the final sparse coefficient, so (8) is rewritten as

  α = (α₁ + α₂) / 2.  (9)

The description of the WPNSEC model is shown in Algorithm 1.

Require: Noisy image Y.
Ensure: Denoised image .
 1: Initialization: , c, , ;
 2: For  t = 0, 1, , do
 3:   Set ;
 4:   Extract local image patches () from
    the noise image ;
 5:   For each local image patch do
 6:     The nonlocal similar patches are searched
      by Euclidean distance;
 7:     Stack similar patches to form a noisy similar
      patch group ;
 8:     Update by GST algorithm;
 9:     Update by surrogate algorithm;
 10:    Update by Eq. (9);
 11:  End for
 12:  Aggregate to form the clean image ;
 13: End for
 14: Return The denoised image ;
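A toy version of steps 8–10 of Algorithm 1 for a single patch group might look as follows. This is our own simplification, not the authors' code: it assumes an orthogonal dictionary, a scalar threshold inside GST instead of per-coefficient weights, and a scalar regularization for the ℓ1 surrogate.

```python
import numpy as np

def gst(y, lam, p, J=3):
    """Vectorized generalized soft-thresholding for the weighted l_p term."""
    tau = (2*lam*(1-p))**(1/(2-p)) + lam*p*(2*lam*(1-p))**((p-1)/(2-p))
    out = np.zeros_like(y)
    big = np.abs(y) > tau            # entries below tau are set to 0
    x = np.abs(y[big])
    for _ in range(J):               # fixed-point refinement
        x = np.abs(y[big]) - lam*p*x**(p-1)
    out[big] = np.sign(y[big]) * x
    return out

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def wpnsec_step(y_group, beta, D, lam_p, lam_e, p):
    """One WPNSEC update for one similar patch group (Algorithm 1,
    steps 8-10): alpha1 from the weighted l_p term via GST, alpha2
    from the l1 sparse error term via the surrogate (shrinkage toward
    the BM3D pre-estimate beta), then average per Eq. (9)."""
    alpha_noisy = D.T @ y_group          # orthogonal dictionary assumed
    alpha1 = gst(alpha_noisy, lam_p, p)
    alpha2 = beta + soft(alpha_noisy - beta, lam_e)
    return 0.5 * (alpha1 + alpha2)
```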
3.3. The Multichannel WPNSEC Model

At present, most existing image denoising algorithms are devised for gray images. Since color images play a crucial role in real life, it is necessary to improve color image quality. Thus, the WPNSEC model is extended to color image denoising and a multichannel WPNSEC color image denoising algorithm is proposed. For color image denoising, it is unreasonable to apply a gray image denoising algorithm to the cross-channel noisy image patch group directly. To remove the noise of color images effectively, the noise strength in the different color channels should be taken into account. So, a weight matrix W [33] is introduced to balance the noise of the RGB channels. The weight W is a block-diagonal matrix whose diagonal blocks are determined by the channel noise levels, with larger weights assigned to less noisy channels:

  W = diag(W_r, W_g, W_b),  W_c ∝ (1/σ_c) I,  c ∈ {r, g, b}.  (10)
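A minimal sketch of such a block-diagonal channel weight matrix, assuming an MC-WNNM-style inverse-noise-level scaling (the exact scaling in [33] may differ):

```python
import numpy as np

def channel_weight_matrix(sigma_r, sigma_g, sigma_b, d):
    """Block-diagonal weight for a cross-channel patch of length 3*d:
    each channel's diagonal block is scaled inversely to its noise
    level, so a noisier channel contributes less to the data
    fidelity term."""
    w = np.concatenate([np.full(d, 1.0 / sigma_r),
                        np.full(d, 1.0 / sigma_g),
                        np.full(d, 1.0 / sigma_b)])
    return np.diag(w)
```

With the noise levels used later in the experiments (σr = 40, σg = 20, σb = 30), the green channel receives the largest weight, since it is the least noisy.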

The multichannel WPNSEC minimization is as follows:

  min_α ‖W(Y − Dα)‖_F² + ‖w ⊙ α‖_p^p.  (11)

Since (11) is a large-scale nonconvex optimization problem, it is very difficult to solve directly. However, it can be solved efficiently under the ADMM framework; the updating procedures of the ADMM method are shown in Algorithm 2. First, utilizing the variable splitting scheme, (11) is transformed into an equality-constrained problem: the multichannel WPNSEC model is separated into two variables by introducing an auxiliary variable z. Equation (11) is reformulated as

  min_{α, z} ‖W(Y − Dz)‖_F² + ‖w ⊙ α‖_p^p  s.t.  α = z.  (12)

Require: Noisy image Y and weight W, > 1, .
Ensure: Sparse coefficient and auxiliary variable z.
 1: Initialization: t, > 0, z, and b;
 2: Repeat
 3:   Update z as ;
 4:   Update as ;
 5:   Update b as ;
 6:   Update as ;
 7:   ;
 8: Until the convergence condition is satisfied or the maximum iteration number is reached.
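The ADMM loop of Algorithm 2 can be sketched on a small problem as follows. This is a 1-D toy of our own with a scalar weight w inside the ℓp term; the paper's Matlab implementation operates on patch-group matrices with per-coefficient weights.

```python
import numpy as np

def gst(y, lam, p, J=3):
    """Vectorized generalized soft-thresholding (see Section 3.1)."""
    tau = (2*lam*(1-p))**(1/(2-p)) + lam*p*(2*lam*(1-p))**((p-1)/(2-p))
    out = np.zeros_like(y)
    big = np.abs(y) > tau
    x = np.abs(y[big])
    for _ in range(J):
        x = np.abs(y[big]) - lam*p*x**(p-1)
    out[big] = np.sign(y[big]) * x
    return out

def admm_wpnsec(y, D, W, w, p, rho=3.0, mu=1.001, iters=10):
    """ADMM sketch for  min ||W(y - D z)||_2^2 + w * sum_i |alpha_i|^p
    s.t. alpha = z  (Algorithm 2, scaled-multiplier form)."""
    n = D.shape[1]
    alpha, z, b = np.zeros(n), np.zeros(n), np.zeros(n)
    A = D.T @ W.T @ W @ D
    for _ in range(iters):
        # z-step: closed-form solution of the quadratic subproblem
        z = np.linalg.solve(2*A + rho*np.eye(n),
                            2*D.T @ W.T @ W @ y + rho*(alpha + b))
        # alpha-step: element-wise GST with threshold parameter w/rho
        alpha = gst(z - b, w / rho, p)
        # multiplier and penalty updates
        b = b + alpha - z
        rho = mu * rho
    return alpha, z
```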

The augmented Lagrangian function of (12) is

  L(α, z, b) = ‖W(Y − Dz)‖_F² + ‖w ⊙ α‖_p^p + (ρ/2)‖α − z + b‖_2²,  (13)

where b is the (scaled) Lagrangian multiplier and ρ > 0 is the penalty parameter. In the t-th (t = 1, 2, …) iteration, the Lagrangian multiplier, the penalty parameter, and the optimization variables are denoted by b^t, ρ^t, z^t, and α^t, respectively. Based on the ADMM, minimizing (13) is transformed into the alternating updates

  z^{t+1} = arg min_z ‖W(Y − Dz)‖_F² + (ρ^t/2)‖α^t − z + b^t‖_2²,  (14)
  α^{t+1} = arg min_α ‖w ⊙ α‖_p^p + (ρ^t/2)‖α − z^{t+1} + b^t‖_2²,  (15)
  b^{t+1} = b^t + α^{t+1} − z^{t+1}.  (16)

Next, the effective solution for each subproblem will be introduced. The specific updating is as follows.

(1) According to (14), update z with α and b fixed:

  z^{t+1} = arg min_z ‖W(Y − Dz)‖_F² + (ρ^t/2)‖α^t − z + b^t‖_2².  (17)

Obviously, (17) is quadratic and has a closed-form solution:

  z^{t+1} = (2DᵀWᵀWD + ρ^t I)^{−1} (2DᵀWᵀWY + ρ^t(α^t + b^t)).  (18)

(2) According to (15), update α with z and b fixed:

  α^{t+1} = arg min_α ‖w ⊙ α‖_p^p + (ρ^t/2)‖α − z^{t+1} + b^t‖_2².  (19)

In the following derivation, in order to avoid parameter confusion, the iteration number t is omitted, so (19) is simplified as

  α = arg min_α ‖w ⊙ α‖_p^p + (ρ/2)‖α − z + b‖_2².  (20)

Let R = z − b; (20) is rewritten as

  α = arg min_α (ρ/2)‖α − R‖_2² + ‖w ⊙ α‖_p^p.  (21)

Considering that ‖w ⊙ α‖_p^p is a nonconvex norm, it is difficult to solve (21) jointly. To obtain a tractable solution, Theorem 3 in [25] is employed, which allows (21) to be decomposed into the element-wise scalar problems

  α_j = arg min_{α_j} (1/2)(α_j − R_j)² + (w_j/ρ)|α_j|^p.  (24)

Hence, (19) is reduced to (24), and the generalized soft-thresholding (GST) algorithm [34] is exploited to solve (24) effectively:

  α_j = T_p^GST(R_j; w_j/ρ, J),  (25)

where T_p^GST denotes the generalized soft-thresholding operator and J is the iteration number of the GST algorithm. If |R_j| ≤ τ, the global minimum is α_j = 0, where τ is the GST threshold determined by λ = w_j/ρ and p. Otherwise, the optimal value is at a nonzero point.

(3) According to (16), update b with z and α fixed:

  b^{t+1} = b^t + α^{t+1} − z^{t+1}.

(4) Update the value of the penalty parameter ρ:

  ρ^{t+1} = μρ^t,  μ > 1.

Since the weighted ℓp norm is nonconvex, the unboundedly increasing penalty sequence ρ^t (driven by μ > 1) is used to guarantee the convergence of Algorithm 2. By solving the above subproblems separately, an effective solution is obtained. The multichannel WPNSEC denoising algorithm is summarized in Algorithm 3.

Require: Noisy image Y, and .
Ensure: Denoised image .
 1: Initialization: ;
 2: For   do
 3:   Set ;
 4:   Extract local image patches from the noise image ;
 5:   For each local patch do
 6:     The non-local similar patches are searched by Euclidean distance;
 7:     Stack similar patches to form a cross-channel noisy patch group ;
 8:     Update by WPNSEC model;
 9:     Apply Eq. (4) to solve and obtain the estimation ;
 10:  End for
 11:  Aggregate to form the clean image ;
 12: End for
 13: Return The denoised image ;

4. Experimental Results and Analysis

4.1. Parameters Selection

In the gray image denoising experiments, the noise standard deviation is 10, 20, 30, 40, 50, 60, and 80. When the noise standard deviation is in the lowest range, the size of the overlapping block is 6 × 6; for the next range, the size of the overlapping block is 7 × 7; for the next, 8 × 8; and for the highest range, 9 × 9. The size of the searching window for selecting similar patches is 40 × 40, and the number of similar patches per window is 70. When , is (0.2, 0.18, 0.67); otherwise, is (0.3, 0.22, 0.67). The simulation experiments are carried out with the gray test images shown in Figure 2 (from left to right and from top to bottom: airplane, barbara, boat, cameraman, couple, foreman, gold hill, house, leaves, lena, lin, monarch, parrots, peppers, girl, and man).

The parameters of the multichannel WPNSEC algorithm for color image denoising are set as follows in detail: the size of searching window is , and the size of each image patch is l=6. The number of the nonlocal similar patches is m=60. The updating parameter is =1.001 and the initial penalty parameter is =3. The numbers of iterations in Algorithms 2 and 3 are =10 and =8, respectively. In addition, all experiments are performed under the Matlab-R2016 environment on a machine with AMD Athlon(tm) II X4 645 Processor CPU 4GB RAM.

4.2. Advantages of the Weighted Norm Sparse Error Constraint

This section demonstrates the advantages of the proposed algorithm. Figure 3 shows the denoised results of nonconvex weighted norm minimization model and weighted norm sparse error constraint method on test image parrot. In Figure 3, an image patch is cropped randomly from the denoised image (marked by a red box). In order to observe clearly, this small patch is enlarged and pasted into the lower left corner of the denoised image.

Figure 3(a) reflects that the edge information around the parrot’s eye is well recovered, but it is too smooth in the flat area to lose some detailed information. The reason is that the detailed information of the noise image has been destroyed by noise. The sparse coefficient obtained from the noise image will decrease the denoising performance of the algorithm. The proposed algorithm introduces a sparse error constraint, which reduces the influence of noise on sparse coefficients and improves the denoising performance. In Figure 3(b), the edge information around the parrot’s eyes is retained well and the texture information of the smooth area is better restored.

To observe intuitively that the WPNSEC algorithm outperforms the WPNM algorithm, a line graph is shown in Figure 4. Figure 4 shows that the PSNR of both the proposed algorithm and the WPNM algorithm decreases gradually as the noise standard deviation increases, and the PSNR of the WPNSEC algorithm is higher than that of the competing method under all noise standard deviations.

4.3. Results on Gray Noisy Images Denoising

The proposed WPNSEC algorithm is compared with several related classical algorithms, including BM3D [35], EPLL [36], NCSR [37], WNNM [20], and WSNM [24]. To evaluate the performance of the WPNSEC algorithm in gray image denoising, the peak signal-to-noise ratio (PSNR) and the feature similarity index (FSIM) are used as quality measures. The PSNR values of the proposed algorithm and all competing algorithms on 10 test images are listed in Table 1, and the corresponding FSIM values are displayed in Table 2; the noise levels are σ = 10, 20, 30, and 50 from top to bottom. An overall impression can be drawn from Table 1: when the noise level increases from 20 to 50, the improvements of WPNSEC are 0.60 dB, 0.44 dB, and 0.52 dB on average, respectively, and the average PSNR of the proposed algorithm increases by 0.96 dB, 1.32 dB, 0.66 dB, 0.63 dB, and 0.56 dB over BM3D, EPLL, NCSR, WNNM, and WSNM. In summary, although the PSNR value decreases as the standard deviation increases, most of the PSNR values of the proposed algorithm are higher than those of the related algorithms, and its average PSNR is the highest among all competing algorithms. To compare the proposed algorithm with the competing methods on PSNR and FSIM intuitively, a line graph of the correlation algorithms is shown in Figure 5, drawn from the average PSNR (dB) and FSIM of the different denoising algorithms on 16 gray images with standard deviation σ. At the same time, to reflect the efficiency of the WPNSEC algorithm, the average running time of all algorithms is shown in Table 3.
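The PSNR metric used throughout these tables can be computed as below (a standard NumPy sketch, not the authors' code; FSIM is considerably more involved and is not sketched here):

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image x
    and a denoised image y (higher is better)."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```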


σ = 10

Image      BM3D    EPLL    NCSR    WNNM    WSNM    WPNSEC
Monarch    33.24   33.17   33.21   33.40   33.51   34.93
Couple     32.17   32.08   32.50   32.84   32.93   35.81
House      34.42   34.21   35.16   35.42   35.57   36.83
Lena       32.89   33.16   34.31   34.62   34.81   35.49
Parrot     32.59   32.21   33.65   34.44   34.49   35.75
C. man     32.52   32.55   33.17   33.48   33.62   34.24
Boat       32.57   32.64   32.91   32.89   32.97   33.70
Leaves     31.84   32.17   33.04   33.11   33.35   34.95
Peppers    33.24   33.29   33.27   33.34   33.47   34.67
Airplane   33.32   33.21   33.41   33.52   33.74   34.61
Average    32.88   32.87   33.46   33.71   33.85   35.10

σ = 20

Image      BM3D    EPLL    NCSR    WNNM    WSNM    WPNSEC
Monarch    30.35   30.48   30.69   31.10   31.13   32.37
Couple     30.76   30.54   30.56   30.82   30.83   32.63
House      33.77   32.98   33.97   34.01   34.05   34.08
Lena       31.60   32.61   32.92   33.12   33.13   32.86
Parrot     29.96   29.97   32.16   30.19   30.21   33.03
C. man     30.28   30.34   30.48   30.75   30.77   31.47
Boat       29.69   30.66   30.74   31.00   31.02   31.26
Leaves     29.08   30.76   30.59   30.74   30.96   31.12
Peppers    31.14   31.17   31.26   31.53   31.55   31.86
Airplane   30.44   32.41   32.58   32.82   32.82   33.06
Average    30.71   31.19   31.60   31.65   31.74   32.26

σ = 30

Image      BM3D    EPLL    NCSR    WNNM    WSNM    WPNSEC
Monarch    28.36   28.35   28.46   28.91   28.93   28.96
Couple     30.08   28.61   28.57   28.98   29.02   30.68
House      32.08   31.22   32.07   32.52   32.54   32.69
Lena       29.55   31.41   29.43   29.83   31.48   30.08
Parrot     29.13   28.07   30.38   28.33   28.33   30.61
C. man     28.63   28.36   28.58   28.80   28.83   29.31
Boat       29.11   28.89   28.94   29.24   29.26   29.52
Leaves     27.82   27.38   28.14   28.61   28.62   28.64
Peppers    29.28   28.35   31.11   29.48   29.54   30.56
Airplane   27.56   30.41   30.70   30.87   30.92   31.21
Average    29.16   29.11   29.64   29.56   29.75   30.23

σ = 50

Image      BM3D    EPLL    NCSR    WNNM    WSNM    WPNSEC
Monarch    25.51   25.77   25.78   26.18   26.30   26.32
Couple     26.46   26.23   26.19   27.45   26.71   28.30
House      29.69   28.76   29.62   30.23   30.31   30.29
Lena       27.07   26.42   27.12   27.16   27.28   27.40
Parrot     25.89   25.83   27.88   26.00   26.10   28.23
C. man     26.13   26.03   26.15   26.42   26.44   26.62
Boat       26.78   26.65   26.67   26.97   27.01   26.88
Leaves     24.75   24.36   24.96   25.49   25.54   25.69
Peppers    26.68   26.62   28.07   26.81   26.94   28.77
Airplane   25.10   27.88   28.18   28.44   28.49   28.46
Average    26.41   26.46   27.06   27.12   27.11   27.70


σ = 10

Image      BM3D     EPLL     NCSR     WNNM     WSNM     WPNSEC
Monarch    0.9578   0.9564   0.9624   0.9550   0.9567   0.9661
Couple     0.9794   0.9781   0.9818   0.9815   0.9843   0.9897
House      0.9511   0.9462   0.9601   0.9589   0.9590   0.9592
Lena       0.9653   0.9633   0.9819   0.9820   0.9831   0.9872
Parrot     0.9367   0.9364   0.9618   0.9549   0.9563   0.9621
C. man     0.9553   0.9561   0.9553   0.9530   0.9541   0.9590
Boat       0.9723   0.9726   0.9813   0.9814   0.9832   0.9879
Leaves     0.9492   0.9512   0.9730   0.9628   0.9642   0.9743
Peppers    0.9587   0.9573   0.9611   0.9530   0.9543   0.9612
Airplane   0.9396   0.9468   0.9563   0.9515   0.9532   0.9591
Average    0.9565   0.9564   0.9675   0.9634   0.9648   0.9706

σ = 20

Image      BM3D     EPLL     NCSR     WNNM     WSNM     WPNSEC
Monarch    0.9298   0.9294   0.9320   0.9176   0.9187   0.9389
Couple     0.9573   0.9461   0.9555   0.9532   0.9541   0.9583
House      0.9190   0.9173   0.9201   0.9177   0.9182   0.9202
Lena       0.9541   0.9551   0.9598   0.9601   0.9603   0.9357
Parrot     0.9345   0.9302   0.9348   0.9212   0.9237   0.9356
C. man     0.9097   0.9095   0.9086   0.9096   0.9125   0.9175
Boat       0.9551   0.9537   0.9553   0.9561   0.9574   0.9621
Leaves     0.9527   0.9493   0.9469   0.9296   0.9326   0.9503
Peppers    0.9322   0.9314   0.9327   0.9149   0.9161   0.9341
Airplane   0.8995   0.9172   0.9185   0.9116   0.9142   0.9218
Average    0.9344   0.9339   0.9367   0.9292   0.9308   0.9375

σ = 30

Image      BM3D     EPLL     NCSR     WNNM     WSNM     WPNSEC
Monarch    0.9013   0.8994   0.9082   0.8898   0.8965   0.9183
Couple     0.9279   0.9273   0.9285   0.9323   0.9344   0.9311
House      0.8991   0.8987   0.9000   0.8930   0.8952   0.9023
Lena       0.9563   0.9531   0.9446   0.9452   0.9449   0.9453
Parrot     0.9194   0.9175   0.9208   0.8999   0.9051   0.9279
C. man     0.8632   0.8651   0.8707   0.8737   0.8739   0.8746
Boat       0.9351   0.9293   0.9309   0.9346   0.9351   0.9390
Leaves     0.9266   0.9185   0.9279   0.9041   0.9184   0.9284
Peppers    0.8996   0.9042   0.9126   0.8897   0.8931   0.9115
Airplane   0.8791   0.8788   0.8897   0.8865   0.8895   0.8967
Average    0.9108   0.9092   0.9130   0.9049   0.9086   0.9175

σ = 50

Image      BM3D     EPLL     NCSR     WNNM     WSNM     WPNSEC
Monarch    0.8601   0.8552   0.8615   0.8517   0.8573   0.8821
Couple     0.8746   0.8765   0.8800   0.8913   0.8928   0.8922
House      0.8573   0.8493   0.8675   0.8706   0.8741   0.8834
Lena       0.8921   0.8896   0.9191   0.9177   0.9182   0.9210
Parrot     0.8931   0.8779   0.8926   0.8855   0.8860   0.8994
C. man     0.7922   0.7869   0.8091   0.8279   0.8292   0.8350
Boat       0.8876   0.8815   0.8847   0.8954   0.8953   0.8939
Leaves     0.8564   0.8475   0.8865   0.8754   0.8791   0.9011
Peppers    0.8693   0.8531   0.8685   0.8489   0.8504   0.8777
Airplane   0.8360   0.8256   0.8378   0.8420   0.8432   0.8483
Average    0.8619   0.8543   0.8707   0.8705   0.8726   0.8834


Algorithms   BM3D    EPLL      NCSR      WNNM      WSNM      WPNSEC

Times        2.67    103.78    374.01    169.32    174.21    89.64

In order to display the visual quality of the proposed denoising algorithm, this part uses the leaves test image with = 20 to carry out a simulation experiment, and the experimental results are shown in Figure 6. It can be clearly seen from the highlighted red window in Figure 6 that the WPNSEC algorithm recovers the texture of the leaves well, but other competing denoising algorithms contain more noise and produce many artifacts. In Figure 6(c), since the EPLL algorithm ignores the nonlocal self-similarity, the details of the texture and structure edges are seriously lost, which affects the visual perception. The WPNSEC algorithm that is based on similar patches group preserves the image features better while effectively removing the image noise. According to the above analysis, the proposed algorithm has robust denoising performance, which can not only obtain higher PSNR and FSIM indices but also produce better visual quality.

In Algorithm 1, the size of an image patch group is determined by l and m, where l represents the size of the image patch and m is the number of similar patches in a similar patch group. Constructing the dictionary is the main computation in each iteration (Step 5 in Algorithm 1). The generalized soft-thresholding step (Step 8 in Algorithm 1) is comparatively cheap, where K represents the number of iterations in the generalized soft-thresholding algorithm. The total cost of the proposed WPNSEC image denoising algorithm therefore scales with the number of iterations of Algorithm 1 and with S, the number of image patches in a sliding window.

4.4. Results on Color Noisy Images Denoising

In this subsection, the multichannel WPNSEC method is compared with other competing denoising methods, including CBM3D [38], MLP [39], DnCNN [40], TNRD [41], and MC-WNNM [33], on the 24 color images displayed in Figure 7 (from left to right and from top to bottom: gate, red door, caps, bikes, sailing 1, flower, buildings, sailing 4, couple, stream, rapids, girl, ocean, lighthouse 1, house, painted house, parrots, plane, woman hat, sailing 2, sailing 3, statue, woman, and lighthouse 2). To obtain the color noise images, additive white Gaussian noise is added to each channel of the clean image with σr = 40, σg = 20, and σb = 30.

The PSNR values of the competing methods on the Kodak PhotoCD dataset are reported in Table 4, and the average PSNR values of all methods on the CBSD68 dataset are reported in Table 5. The highest PSNR value for each color image is shown in bold in the original tables. Table 4 shows that most of the PSNR values of our proposed algorithm are higher than those of the other competing methods and that its average PSNR is the highest. The improvements of multichannel WPNSEC reach 1.84 dB, 0.43 dB, 0.29 dB, 8.39 dB, and 0.27 dB over CBM3D, MLP, TNRD, DnCNN, and MC-WNNM on average, respectively. At the same time, to reflect the efficiency of the proposed algorithm, the average running time of each algorithm is shown in Table 6.



Image           CBM3D   MLP     TNRD    DnCNN   MC-WNNM  Ours
Gate            25.24   25.70   25.74   20.47   25.83    26.13
Red door        28.27   30.12   30.21   20.47   29.75    30.19
Caps            28.81   31.19   31.49   20.53   31.54    31.64
Woman hat       27.95   29.88   29.86   20.47   29.94    30.37
Bikes           25.03   26.00   26.18   20.52   25.74    26.52
Sailing 1       26.24   26.84   26.90   20.66   27.00    27.42
Flower          27.88   30.28   30.40   20.52   30.53    30.47
Buildings       25.05   25.59   25.83   20.57   25.95    26.32
Sailing 2       28.44   30.75   30.81   20.50   30.95    31.04
Sailing 3       28.27   30.38   30.57   20.52   30.52    30.83
Sailing 4       26.95   28.00   28.14   20.52   28.23    28.43
Couple          28.76   30.87   31.05   20.60   31.12    31.41
Stream          23.76   23.95   23.99   20.52   24.01    24.63
Rapids          26.02   26.97   27.11   20.51   27.03    27.21
Girl            28.38   30.15   30.44   20.71   30.52    30.76
Ocean           27.75   28.82   28.87   20.52   29.00    29.27
Statue          27.90   29.57   29.80   20.56   29.60    29.76
Woman           25.77   26.40   26.41   20.53   26.52    26.73
Lighthouse 1    27.30   28.67   28.81   20.53   28.89    29.34
Plane           28.96   30.40   30.76   21.44   30.80    30.92
Lighthouse 2    26.54   27.53   27.60   20.51   27.59    27.91
House           27.05   28.17   28.27   20.51   28.00    28.27
Parrots         29.14   32.31   32.51   20.54   32.06    32.19
Painted house   25.75   26.41   26.53   20.59   26.64    27.42
Average         27.13   28.54   28.68   20.58   28.70    28.97


Algorithms   CBM3D   MLP     TNRD    DnCNN   MC-WNNM   Ours

Average      26.28   27.69   27.93   19.74   28.05     28.22


Algorithms   CBM3D   MLP    TNRD   DnCNN    MC-WNNM   Ours

Times        3.69    3.92   -      190.73   1127.91   132.47

The denoised images of our proposed algorithm and all competing methods on the color image buildings are shown in Figure 8. To show the details of the image more clearly, a red window selects a partial block of the image, which is then enlarged. Figure 8 shows that the denoising quality of the multichannel WPNSEC algorithm is better than that of the other algorithms. DnCNN, MLP, and CBM3D preserve as much edge information as possible when removing the noise, but they produce some distortion and artifacts: for example, in Figures 8(c), 8(d), and 8(f), there is a lot of residual noise on the windows of the house and a large number of artifacts in the smooth area. The TNRD method reduces noise and artifacts, so it has a better denoising effect than DnCNN, MLP, and CBM3D; however, due to its oversmoothing effect, some boundaries and details are lost. For instance, in Figure 8(e), the red mark in the lower left corner of the red window is unclear. MC-WNNM is a state-of-the-art image denoising algorithm, but it is built on convex optimization, so its sparse coefficients lack robustness. Although the MC-WNNM algorithm performs well, it is not as good as the multichannel WPNSEC algorithm: as can be seen from Figure 8(g), the details of the letters and the red mark in the lower left window are slightly lost. In Figure 8(h), the edge and detail information are well preserved, the background is smoother, and the result gives a better visual experience. Although our proposed algorithm does not render all the details clearly (such as the red mark), it shows the best visual effect among the competing methods.

In Algorithm 2, the size of a color image patch is m, where l represents the size of the image patch and m is the number of similar patches in a similar patches group. Updating Z costs and updating costs . The cost for updating b and can be ignored. Therefore, the cost of the ADMM algorithm is , where represents the number of iterations of Algorithm 2. In Algorithm 3, for multichannel color image denoising, the overall cost is , where represents the number of iterations of Algorithm 3 and S represents the number of image patches in a sliding window.
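To make the per-iteration structure behind this cost analysis concrete, the sketch below shows a minimal scaled-form ADMM loop for the simple l1 (p = 1) special case of a sparse error penalty. It is an illustration of the alternating update pattern, not the paper's exact WPNSEC solver; for 0 < p < 1 the shrinkage step would be replaced by a generalized (weighted lp) shrinkage rule:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (the p = 1 special case; a
    # generalized shrinkage rule would replace this for p < 1).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(y, lam=0.5, rho=1.0, iters=100):
    # Solves min_x 0.5*||x - y||^2 + lam*||z||_1 s.t. x = z
    # by alternating the three ADMM updates; each iteration costs
    # O(n) here, and the total cost scales with the iteration count.
    x = np.zeros_like(y); z = np.zeros_like(y); u = np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (z - u)) / (1.0 + rho)  # quadratic subproblem (closed form)
        z = soft_threshold(x + u, lam / rho)   # sparsity subproblem (shrinkage)
        u = u + x - z                          # scaled dual variable update
    return z

y = np.array([3.0, 0.2, -1.5, 0.05])
print(admm_l1(y))  # large entries shrunk by lam, small ones driven to zero
```

In this convex special case the iterates converge to the soft-thresholded input, matching the closed-form proximal solution; the multichannel WPNSEC model adds the channel weight matrix and the nonconvex p-power penalty on top of the same alternating scheme.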

5. Conclusion

Due to the influence of noise, there is a sparse error between the estimated sparse coefficient and the true sparse coefficient, which makes image denoising more challenging. Aiming at this problem, a weighted norm sparse error constraint (WPNSEC) framework for image denoising is proposed. Image denoising is thereby transformed into the problem of reducing the sparse error and finding the optimal sparse coefficient. Specifically, the suitable setting of the power p in the norm is analyzed in detail. In addition, the WPNSEC framework is extended to color image denoising, and a multichannel WPNSEC model is proposed; the model is solved by the ADMM algorithm. Considering that the noise in the RGB channels has different statistics, a weight matrix is introduced. The experimental results on gray image and color image denoising demonstrate that the WPNSEC algorithm and the multichannel WPNSEC algorithm outperform current state-of-the-art denoising methods.

Data Availability

The gray images used for the image denoising experiments can be downloaded from the public Set12 and BSD68 datasets. The color images used for the image denoising experiments can be downloaded from the public Kodak PhotoCD and CBSD68 datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (nos. 61772176, 61370169, and 61402153), the Plan for Scientific Innovation Talent of Henan Province (no. 184100510003), the Project of Science and Technology Department of Henan Province of China (nos. 182102210362 and 162102210261), and the Young Scholar Program of Henan Province (no. 2017GGJS041).

References

  1. M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3736–3745, 2006.
  2. J. Jiang, J. Ma, C. Chen, X. Jiang, and Z. Wang, “Noise robust face image super-resolution through smooth sparse representation,” IEEE Transactions on Cybernetics, vol. 47, no. 11, pp. 3991–4002, 2017.
  3. Z. Zha, X. Zhang, Y. Wu et al., “Non-convex weighted lp nuclear norm based ADMM framework for image restoration,” Neurocomputing, vol. 311, pp. 209–224, 2018.
  4. A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), vol. 2, pp. 60–65, June 2005.
  5. J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman, “Non-local sparse models for image restoration,” in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 2272–2279, October 2009.
  6. C. Zuo, L. Jovanov, B. Goossens et al., “Image denoising using quadtree-based nonlocal means with locally adaptive principal component analysis,” IEEE Signal Processing Letters, vol. 23, no. 4, pp. 434–438, 2016.
  7. L. Jia, Q. Zhang, Y. Shang et al., “Denoising for low-dose CT image by discriminative weighted nuclear norm minimization,” IEEE Access, vol. 6, pp. 46179–46193, 2018.
  8. L. Jia, S. Song, L. Yao et al., “Image denoising via sparse representation over grouped dictionaries with adaptive atom size elect,” IEEE Access, vol. 5, pp. 22514–22529, 2017.
  9. K. Zhang, W. Zuo, and L. Zhang, “FFDNet: toward a fast and flexible solution for CNN based image denoising,” IEEE Transactions on Image Processing, vol. 27, no. 9, pp. 4608–4622, 2018.
  10. Y. You, C. Lu, W. Wang, and C.-K. Tang, “Relative CNN-RNN: Learning relative atmospheric visibility from images,” IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 45–55, 2019.
  11. A. Chambolle, “An algorithm for total variation minimization and applications,” Journal of Mathematical Imaging and Vision, vol. 20, no. 1-2, pp. 89–97, 2004.
  12. M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
  13. Q. Zhang and B. Li, “Discriminative K-SVD for dictionary learning in face recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 2691–2698, June 2010.
  14. Z. Jiang, Z. Lin, and L. S. Davis, “Label consistent K-SVD: learning a discriminative dictionary for recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 11, pp. 2651–2664, 2013.
  15. D. P. K. Lun, “Robust fringe projection profilometry via sparse representation,” IEEE Transactions on Image Processing, vol. 25, no. 4, pp. 1726–1739, 2016.
  16. J. Zhang, D. B. Zhao, and W. Gao, “Group-based sparse representation for image restoration,” IEEE Transactions on Image Processing, vol. 23, no. 8, pp. 3336–3351, 2014.
  17. S. Gu, L. Zhang, W. Zuo, and X. Feng, “Weighted nuclear norm minimization with application to image denoising,” in Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 2862–2869, June 2014.
  18. S. Gu, Q. Xie, D. Meng, W. Zuo, X. Feng, and L. Zhang, “Weighted nuclear norm minimization and its applications to low level vision,” International Journal of Computer Vision, vol. 121, no. 2, pp. 183–208, 2017.
  19. J. Xu, L. Zhang, W. Zuo, D. Zhang, and X. Feng, “Patch group based nonlocal self-similarity prior learning for image denoising,” in Proceedings of the 15th IEEE International Conference on Computer Vision (ICCV 2015), pp. 244–252, December 2015.
  20. A. Eriksson and A. Van Den Hengel, “Efficient computation of robust low-rank matrix approximations in the presence of missing data using the l1 norm,” in Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2010), pp. 771–778, USA, June 2010.
  21. Q. Zhao, D. Meng, Z. Xu, W. Zuo, and Y. Yan, “-norm low-rank matrix factorization by variational Bayesian method,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 4, pp. 825–839, 2015.
  22. Z. Y. Zha, X. Liu, and X. H. Huang, “Analyzing the group sparsity based on the rank minimization methods,” in Proceedings of the IEEE International Conference on Multimedia and Expo, pp. 883–888, New York, NY, USA, 2017.
  23. Q. Wang, X. Zhang, Y. Wu, L. Tang, and Z. Zha, “Non-convex weighted lp minimization based group sparse representation framework for image denoising,” IEEE Signal Processing Letters, vol. 24, no. 11, pp. 1686–1690, 2017.
  24. C. Liu, R. Szeliski, S. B. Kang, C. L. Zitnick, and W. T. Freeman, “Automatic estimation and removal of noise from a single image,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 299–314, 2008.
  25. J. Mairal, M. Elad, and G. Sapiro, “Sparse representation for color image restoration,” IEEE Transactions on Image Processing, vol. 17, no. 1, pp. 53–69, 2008.
  26. M. Lebrun, M. Colom, and J.-M. Morel, “Multiscale image blind denoising,” IEEE Transactions on Image Processing, vol. 24, no. 10, pp. 3149–3161, 2015.
  27. F. Zhu, G. Chen, and P. A. Heng, “From noise modeling to blind image denoising,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 2016.
  28. S. Nam, Y. Hwang, and Y. Matsushita, “A holistic approach to cross-channel image noise modeling and its application to image denoising,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1683–1691, June 2016.
  29. W. Dong, L. Zhang, G. Shi, and X. Li, “Nonlocally centralized sparse representation for image restoration,” IEEE Transactions on Image Processing, vol. 22, no. 4, pp. 1620–1630, 2013.
  30. The Berkeley Segmentation Dataset and Benchmark, http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/.
  31. J. Xu, L. Zhang, D. Zhang, and X. Feng, “Multi-channel weighted nuclear norm minimization for real color image denoising,” in Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), vol. 2, pp. 1105–1113, Venice, Italy, October 2017.
  32. W. M. Zuo, D. Y. Meng, and L. Zhang, “A generalized iterated shrinkage algorithm for non-convex sparse coding,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 217–224, New York, NY, USA, 2013.
  33. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007.
  34. E. Luo, S. H. Chan, and T. Q. Nguyen, “Adaptive image denoising by mixture adaptation,” IEEE Transactions on Image Processing, vol. 25, no. 10, pp. 4489–4503, 2016.
  35. Y. Xie, S. Gu, Y. Liu, W. Zuo, W. Zhang, and L. Zhang, “Weighted schatten p-norm minimization for image denoising and background subtraction,” IEEE Transactions on Image Processing, vol. 25, no. 10, pp. 4842–4857, 2016.
  36. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Color image denoising via sparse 3D collaborative filtering with grouping constraint in luminance-chrominance space,” in Proceedings of the 14th IEEE International Conference on Image Processing (ICIP 2007), vol. 1, pp. I313–I316, USA, September 2007.
  37. H. C. Burger, C. J. Schuler, and S. Harmeling, “Image denoising: can plain neural networks compete with BM3D?” in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2012), pp. 2392–2399, June 2012.
  38. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3142–3155, 2017.
  39. Y. Chen, W. Yu, and T. Pock, “On learning optimized reaction diffusion processes for effective image restoration,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), pp. 5261–5269, USA, June 2015.
  40. P. Rodríguez and B. Wohlberg, “Efficient minimization method for a generalized total variation functional,” IEEE Transactions on Image Processing, vol. 18, no. 2, pp. 322–332, 2009.
  41. Q. Liu, S. Wang, J. Luo, Y. Zhu, and M. Ye, “An augmented Lagrangian approach to general dictionary learning for image denoising,” Journal of Visual Communication and Image Representation, vol. 23, no. 5, pp. 753–766, 2012.

Copyright © 2019 Jiucheng Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
