Research Article | Open Access
Ao Li, Hayaru Shouno, "Dictionary-Based Image Denoising by Fused-Lasso Atom Selection", Mathematical Problems in Engineering, vol. 2014, Article ID 368602, 10 pages, 2014. https://doi.org/10.1155/2014/368602
Dictionary-Based Image Denoising by Fused-Lasso Atom Selection
We propose an efficient image denoising scheme based on fused lasso with dictionary learning. The scheme makes two main contributions. First, we learn a patch-based adaptive dictionary by principal component analysis (PCA) after clustering the image into many subsets, which better preserves the local geometric structure. Second, we code the patches in each subset by fused lasso with the cluster-wise learned dictionary and propose an iterative Split Bregman algorithm to solve it rapidly. We demonstrate the capabilities of the scheme with several experiments. The results show that the proposed scheme is competitive with several excellent denoising algorithms.
As an essential low-level image processing procedure, image denoising has been studied extensively and is a classical inverse problem. The general observation model with additive noise is
y = x + n, (1)
where y is the noisy observation and x and n denote the original image and the white Gaussian noise, respectively.
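As a minimal illustration of this additive model, a noisy observation can be simulated by adding zero-mean Gaussian noise to a clean image; the image content and noise level below are illustrative, not values from the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(x, sigma):
    """Simulate the observation model y = x + n with white Gaussian noise."""
    n = rng.normal(0.0, sigma, size=x.shape)  # zero-mean Gaussian noise field
    return x + n

x = np.full((256, 256), 128.0)   # toy constant "clean" image
y = add_gaussian_noise(x, sigma=20.0)
```

The empirical mean and standard deviation of the residual y − x recover the assumed noise statistics, which is the sanity check a denoising experiment starts from.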
With the degradation model in (1), many denoising algorithms have been proposed during the past decades, from early spatial- and frequency-domain filters [1, 2] to the more recently developed wavelet and beyond-wavelet shrinkage methods [3–5]. Because wavelets cannot adequately represent the local structure in images, an effective dictionary-learning algorithm called K-SVD was proposed and achieved good results in the image denoising task. In this method, a patch-based overcomplete dictionary is learned first, and then denoising is implemented by solving the sparse coding of each patch under the assumption that the patches are sparsely representable over the learned dictionary. Foi et al. proposed the pointwise shape-adaptive DCT applied to each patch and its neighborhood, which achieves a very sparse representation of the noisy image and yields effective denoising results. To lower the complexity, an orthogonal dictionary learning method was proposed that trains a global dictionary by collecting samples randomly from the degraded image. Though it achieves good performance in image restoration, progress can still be made in learning better dictionaries that represent the patches accurately. To better represent the image, a two-stage denoising method with PCA was proposed that trains the dictionary from the local neighboring patches.
More recently, a set of approaches with nonlocal (NL) techniques has been used for noise removal. The idea of NL can be traced back to early work in which similar pixels are sought and the filtered pixel is computed as their weighted average. However, the weight in NL is determined only by the intensity distance between patches, so under strong noise it cannot guarantee finding patches with similar local geometric structure. Zhang et al. proposed a novel nonlocal means method with application to MRI denoising. To make full use of nonlocal similarity, Mairal et al. proposed a nonlocal sparse model for image restoration. Also motivated by NL, a collaborative denoising scheme called BM3D was proposed by Dabov et al., in which patch matching is used to search for similar patches, which are then grouped into a 3D cube. The algorithm applies a 3D sparse transform, such as a 3D wavelet or 3D curvelet, to the cube and removes the noise with Wiener filtering in the transform domain. Another effective method addresses the denoising problem under the kernel regression (KR) framework proposed by Takeda et al. Many classical spatial denoising algorithms, such as bilateral filtering and nonlocal means [16, 17], can be seen as special cases of KR with different constraints.
In this paper, we propose a novel scheme for image denoising based on clustering dictionary learning. First, we cluster the patches with similar geometric structure, taking a weight function as the feature. Second, we learn a patch-based dictionary by principal component analysis for each cluster. Last, we code these patches by fused lasso and develop an iterative Split Bregman algorithm to solve it rapidly.
The rest of the paper is organized as follows. In Section 2, we briefly review the kernel regression framework, since we choose the weight function in KR as the feature for clustering. In Section 3, we discuss how to learn a suitable dictionary to better describe the patches in each cluster. An iterative Split Bregman algorithm for the fused lasso is proposed in Section 4, which is used to code the patches rapidly under the dictionary learned in Section 3. Section 5 presents several experimental results compared with current state-of-the-art algorithms, and Section 6 concludes the paper with a summary.
2. Clustering with Weight Function in KR
Kernel regression is well studied in statistical signal processing. Recently, KR has been used to address many image restoration tasks, such as denoising, interpolation, and deblurring. The kernel regression estimate can be written as
ẑ(x) = Σ_{i=1}^{P} K_h(x_i − x) y_i / Σ_{i=1}^{P} K_h(x_i − x), (2)
where P is the number of pixels in the neighborhood, y_i denotes the ith pixel in the image, whose location is denoted by x_i, K_h(·) is the local kernel that measures the distance between the center pixel and its neighbors, and h is the smoothing parameter that controls the penalization strength.
From (2), the key issue in KR is how to determine an effective form of the kernel function, which has been studied in the literature [19, 20]. Among these methods, the steering kernel is distinguished by producing local regression weights in which the local gradient is taken into consideration to analyze the similarity between pixels in a neighborhood. The weights of the steering kernel can be expressed as
w_{ij} = (√det(C_j) / (2πh²)) exp(−(x_j − x_i)ᵀ C_j (x_j − x_i) / (2h²)), (3)
where w_{ij} denotes the structural similarity between the ith center pixel and the jth pixel in its neighborhood, and C_j is the covariance matrix built from the gradient of the jth pixel. Furthermore, the whole kernel consists of all the weights in the neighborhood.
As introduced in the steering kernel regression framework, the weight vector w_i can represent the underlying local structure of the patch centered at the ith pixel. In addition, Takeda et al. pointed out that patches at different locations with different intensities but similar underlying structure still produce similar kernels. Generally, clustering is implemented with a Euclidean measurement of intensity, as in the NLM denoising algorithm. However, unlike a regression algorithm, what we want here is to learn a dictionary that describes patches with similar geometric structure; we do not require them to have similar intensity at the same time. Therefore, we can take w_i as the feature to measure the structural similarity among patches. The significant distinction between the usual Euclidean measurement and the KR weight function is that the latter gathers patches with similar structure rather than patches with similar intensity. To this end, we take the weights formed from the steering kernel as the feature, which shows an advantage in learning a clustering dictionary that better describes the local structure in each cluster. Also, the ℓ1 norm can be used to measure the distance between the features, which exhibits an anisotropy property. Next, we discuss how to obtain the weights of each patch.
Conveniently, the matrix C_i can be decomposed into three components and reformulated as
C_i = γ_i U_{θ_i} Λ_i U_{θ_i}ᵀ, (4)
where U_{θ_i} = [cos θ_i, sin θ_i; −sin θ_i, cos θ_i] is the rotation matrix and Λ_i = diag(σ_i, σ_i^{−1}) is the elongation matrix; γ_i, θ_i, and σ_i denote the scaling, rotation, and elongation parameters, respectively.
By (4), what we need is to determine the parameters of U_{θ_i} and Λ_i. To this end, we calculate the local gradient matrix of the ith pixel as
G_i = [g_h(x_j), g_v(x_j)]_{x_j ∈ w_i} ∈ R^{M×2}, (5)
where g_h and g_v denote the horizontal and vertical gradient operators, respectively; w_i is an analysis window around the ith pixel, and M is the number of pixels in the window. With the singular value decomposition (SVD), we can obtain
G_i = U_i S_i V_iᵀ, (6)
where S_i = diag(s_1, s_2) holds the singular values with s_1 ≥ s_2, and the columns of V_i = [v_1, v_2] give the dominant orientations of the local gradient field.
With S_i and V_i, we can calculate the parameters in (4) as
θ_i = arctan(v_2(1)/v_2(2)),  σ_i = (s_1 + λ′)/(s_2 + λ′),  γ_i = ((s_1 s_2 + λ″)/M)^{1/2}, (7)
where v_2 is the second column of V_i, λ′ is the regularization parameter used to prevent the ratio from being degenerate, and λ″ is used to keep γ_i from being zero.
We summarize the calculation of the weights of each patch in Algorithm 1.
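The parameter-estimation step of Algorithm 1 can be sketched as follows for a single analysis window: build the local gradient matrix, take its SVD, and read off rotation, elongation, and scaling. Here `lam1` and `lam2` stand in for the two regularization constants, and their values are illustrative assumptions:

```python
import numpy as np

def steering_params(window, lam1=1.0, lam2=0.01):
    """Estimate steering-kernel parameters (orientation, elongation, scaling)
    from the local gradient field of one analysis window. lam1 and lam2 play
    the roles of the two regularization constants; values are illustrative."""
    gh = np.gradient(window, axis=1).ravel()      # horizontal gradients
    gv = np.gradient(window, axis=0).ravel()      # vertical gradients
    G = np.column_stack([gh, gv])                 # M x 2 local gradient matrix
    _, s, Vt = np.linalg.svd(G, full_matrices=False)
    theta = np.arctan2(Vt[0, 1], Vt[0, 0])        # dominant orientation angle
    sigma = (s[0] + lam1) / (s[1] + lam1)         # regularized elongation
    gamma = np.sqrt((s[0] * s[1] + lam2) / G.shape[0])  # regularized scaling
    return theta, sigma, gamma
```

A window with a strong directional gradient yields an elongation well above 1, while a flat window collapses to elongation 1 with a small floor on the scaling, which is exactly the degeneracy the two constants guard against.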
Then, we cluster the ordered overlapping patches into subsets S_k (k is the cluster indicator) by K-Means, using the ℓ1 norm to measure the similarity between the samples and the clustering centers.
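The clustering step can be sketched with a tiny k-means over per-patch feature vectors (in the paper these would be the steering-kernel weight vectors). Seeding with the first k samples and using the Euclidean distance are simplifying assumptions for this sketch:

```python
import numpy as np

def kmeans(features, k, iters=20):
    """Minimal k-means grouping feature vectors into k structural clusters."""
    centers = features[:k].astype(float).copy()   # seed with first k samples
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # assign every feature vector to its nearest cluster center
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned features
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers
```

Each resulting subset then gets its own dictionary in the next section, which is what makes the learned atoms structure-specific rather than global.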
3. Adaptive PCA Dictionary Learning
Once the clusters are formed, we can learn a dictionary suited to each cluster independently with principal component analysis. To this end, following the general dictionary learning formulation, we need to solve the minimization
min_{D_k, Λ_k} ||S̄_k − D_k Λ_k||_F², (8)
where S̄_k is the centered sample matrix satisfying S̄_k = S_k − μ_k 1ᵀ; S_k is the sample matrix whose columns are the patches in the kth cluster, μ_k is the mean vector of S_k, and ||·||_F denotes the Frobenius norm.
For the minimization in (8), the numerical method of alternating minimization is used to estimate the two variables; that is, we estimate one variable while the other is fixed. Separately, we rewrite (8) as
min_{Λ_k} Σ_j ||s̄_j − D_k λ_j||₂², (9)
where λ_j is the jth column of Λ_k. When the dictionary is fixed, the least-squares solution of λ_j is given by
λ_j = (D_kᵀ D_k)^{−1} D_kᵀ s̄_j. (10)
Symmetrically, when Λ_k is fixed, the dictionary update subproblem is
min_{D_k} ||S̄_k − D_k Λ_k||_F². (11)
The patches in the same cluster have similar structure, so we do not require the dictionary to be highly redundant. Hence, to simplify problem (11), we add the orthogonality constraint D_kᵀ D_k = I to the dictionary, and formulation (11) becomes
min_{D_k} ||S̄_k − D_k D_kᵀ S̄_k||_F²  subject to  D_kᵀ D_k = I. (12)
The minimization problem in (12) can be approximately solved by taking the first r principal components of the centered matrix S̄_k within the PCA framework, where r satisfies
r = max{ j : s_j ≥ c σ √m }, (13)
where m is the number of pixels in each patch, c is a constant, and σ is the noise standard deviation; s_j is the jth singular value of S̄_k, and the singular values satisfy s_j ≥ s_{j+1}. The reason we take (13) to choose r is that it discards the principal components that mainly capture the variance caused by noise.
Our learning method trains the dictionary with low complexity. In addition, to make the dictionary more effective and compact, we also provide the selection rule in (13), which trades off essential signal preservation against noise reduction.
We summarize the dictionary learning with PCA in Algorithm 2.
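The per-cluster learning step can be sketched as follows: center the sample matrix (one patch per column), take its SVD, and retain only the leading components above a noise-dependent threshold. The threshold s_j ≥ cσ√m used here is one plausible reading of the selection rule, and the value of `c` is illustrative:

```python
import numpy as np

def pca_dictionary(S, sigma, c=1.1):
    """Learn a per-cluster PCA dictionary from sample matrix S (one patch per
    column). The threshold s_j >= c * sigma * sqrt(m) is an assumed reading
    of the noise-aware component-selection rule."""
    m = S.shape[0]                             # pixels per patch
    mu = S.mean(axis=1, keepdims=True)         # mean patch of the cluster
    Sc = S - mu                                # centered sample matrix
    U, s, _ = np.linalg.svd(Sc, full_matrices=False)
    r = max(int(np.sum(s >= c * sigma * np.sqrt(m))), 1)  # keep >= 1 atom
    return U[:, :r], mu                        # orthonormal atoms, mean vector
```

Because the atoms come from an SVD, the returned dictionary automatically satisfies the orthogonality constraint D_kᵀD_k = I imposed above.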
4. Patch-Based Coding by Fused Lasso
With the preparatory work of Sections 2 and 3, in this section we study sparse coding with the fused lasso. The reason we adopt the fused lasso for sparse coding is that it not only constrains the sparsity of the coefficients but also penalizes the differences between neighboring coefficients, which offers an advantage in recovering image texture [22, 23]. We can therefore recover the noisy image by minimizing the fused-lasso cost function
min_α (1/2)||y − Dα||₂² + λ₁ Σ_i |α_i| + λ₂ Σ_i |α_{i+1} − α_i|, (14)
where y is the sample patch in the image denoising task, D can be seen as the dictionary, α is the sparse code of y, and λ₁ and λ₂ are regularization parameters.
It has been pointed out that solving the fused lasso is computationally demanding. We therefore develop a new scheme for the fused-lasso problem in (14) based on Split Bregman, which was originally proposed for and successfully applied to ℓ1 minimization [25, 26]. Conveniently, we rewrite (14) as
min_α (1/2)||y − Dα||₂² + λ₁||α||₁ + λ₂||Lα||₁, (15)
where L is a matrix each row of which contains only two nonzero values, −1 and 1, in the positions corresponding to the third term in (14). L is given by
L = [−1 1 0 ⋯ 0; 0 −1 1 ⋯ 0; ⋮; 0 ⋯ 0 −1 1]. (16)
To solve (15), we introduce two auxiliary variables in the spirit of Split Bregman, as proposed in our previous work, and change (15) into the constrained form
min_{α, z₁, z₂} (1/2)||y − Dα||₂² + λ₁||z₁||₁ + λ₂||z₂||₁  subject to  z₁ = α, z₂ = Lα. (17)
Note that the minimization of the augmented Lagrangian function of (17) can be expressed as
min_{α, z₁, z₂} (1/2)||y − Dα||₂² + λ₁||z₁||₁ + λ₂||z₂||₁ + (μ₁/2)||α − z₁ + b₁||₂² + (μ₂/2)||Lα − z₂ + b₂||₂², (18)
where b₁ and b₂ are the Bregman variables and μ₁ and μ₂ are penalty parameters. With the alternating direction method (ADM), we can solve (18) with the following iterative algorithm:
z₁^{t+1} = shrink(α^t + b₁^t, λ₁/μ₁), (19)
z₂^{t+1} = shrink(Lα^t + b₂^t, λ₂/μ₂), (20)
b₁^{t+1} = b₁^t + α^t − z₁^{t+1},  b₂^{t+1} = b₂^t + Lα^t − z₂^{t+1}, (21)
α^{t+1} = argmin_α (1/2)||y − Dα||₂² + (μ₁/2)||α − z₁^{t+1} + b₁^{t+1}||₂² + (μ₂/2)||Lα − z₂^{t+1} + b₂^{t+1}||₂², (22)
where shrink(v, t) = sign(v) · max(|v| − t, 0) is the soft-thresholding operator.
Note that the original method for solving (14) is the coordinate descent method (CDM). In practice, however, CDM is slow, complex, and difficult to adapt to specialized algorithms; in addition, it is not effective for some large-scale processing problems.
As for the quadratic minimization in (22), we can obtain the solution by solving the linear equation
(DᵀD + μ₁ I + μ₂ LᵀL) α^{t+1} = Dᵀy + μ₁(z₁^{t+1} − b₁^{t+1}) + μ₂ Lᵀ(z₂^{t+1} − b₂^{t+1}), (23)
where I is the identity matrix of the same size as DᵀD.
We summarize the coding algorithm in Algorithm 3.
Now, with all the preparation above, we can summarize the complete denoising procedure in Algorithm 4.
5. Experiment Results and Analysis
We conducted various experiments on image denoising to demonstrate the performance of the proposed algorithm. We degrade the images by adding artificial zero-mean Gaussian noise with different standard deviations. The test images are shown in Figure 1. The images in the experiments are all of size 256 × 256, and the patch size is 8 × 8; that is, each patch contains 64 pixels in Algorithm 1. Empirically, we fix the two regularization parameters of Algorithm 1 for all the experimental images. We fix the window size in (5) to 9 × 9, since we found that a small size, such as 5 × 5, may sometimes fail to capture the local geometric structure of the underlying image data. Of course, the window size can be tuned according to the concrete experimental requirements. In particular, we extend the image boundary with the “symmetric” type according to the window size. The clustering number in K-Means is flexible: it is small for images with compact structure, such as House in Figure 1, and large for images with complex structure, such as Lena. According to the underlying image structure, the clustering number in our experiments is chosen between 5 and 10. The maximum number of iterations in Algorithm 4 is 10. In addition, the regularization parameters are set within the algorithms themselves.
We compared the proposed algorithm with several state-of-the-art denoising approaches, including the FGTV method, denoising with the dictionary learned by K-SVD (DKSVD), BM3D with wavelets (BM3DW), the kernel regression (KR) method, and the two-stage denoising method with PCA (TSPCA). Owing to limited space, we show only the experimental results of Lena and Couple at a representative noise standard deviation in Figures 2 and 3, respectively. Furthermore, the PSNR results of all the recovered images are reported in Figure 4.
(a) PSNR results of Lena
(b) PSNR results of House
(c) PSNR results of Couple
(d) PSNR results of Man
From the denoising results, we note that the FGTV method yields the worst visual quality among the compared methods. Because it only constrains the total variation and does not adaptively consider local structure, it loses many details of the original image and also lags in PSNR. The KR algorithm generates mottled artifacts in the denoised image, but it does preserve some texture by capturing local structure with the kernel function, such as the curtain in Couple; however, its results decline rapidly, both in visual quality and in PSNR, as the noise standard deviation increases. TSPCA over-smooths the recovered image and is weak at representing texture regions. A further factor behind its poor performance on texture is that its PCA dictionary is learned from neighboring patches, which performs worse than a nonlocal scheme. In Figure 4, the PSNRs of BM3DW, DKSVD, and the proposed method are close to one another, although BM3DW obtains the highest PSNR among the compared methods. However, BM3DW and DKSVD show more distortion in some texture regions than the proposed method. This is because, in BM3DW, the wavelet is not a good representation for all types of images, such as those with complex structures; and although K-SVD learns a dictionary from the image itself to better represent its structures, it produces a universal dictionary that may not be effective for certain local structures. Compared with BM3DW and DKSVD, the proposed method better represents the underlying local geometric structures in the image, such as the hair in Lena and the woman's face in Couple.
6. Conclusion
In this paper, we propose a novel scheme for image denoising. To preserve image textures, we cluster the patches of the noisy image using a meaningful weight vector that captures the underlying local structure. We then learn a dictionary with PCA to better represent the patches of each cluster. Finally, we code the noisy patches over the learned dictionary by fused lasso and obtain the recovered image. We compared the proposed scheme with several state-of-the-art algorithms, and it achieves good performance among them, both in visual quality and in PSNR. In addition, since the dictionary learning and coding are performed independently within each cluster, the method can easily be parallelized on multicore processors once the image has been clustered. This means the proposed method can be applied to large-scale image denoising tasks and can save computational time effectively.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
- X. Jiangtao, W. Lei, and S. Zaifeng, “A switching weighted vector median filter based on edge detection,” Signal Processing, vol. 98, pp. 359–369, 2014.
- D. L. Lau and J. G. Gonzalez, “Closest-to-mean filter: an edge preserving smoother for Gaussian environments,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '97), vol. 4, pp. 2593–2596, April 1997.
- J. Starck, E. J. Candes, and D. L. Donoho, “The curvelet transform for image denoising,” IEEE Transactions on Image Processing, vol. 11, no. 6, pp. 670–684, 2002.
- G. Y. Chen and B. Kégl, “Image denoising with complex ridgelets,” Pattern Recognition, vol. 40, no. 2, pp. 578–585, 2007.
- M. N. Do and M. Vetterli, “The contourlet transform: an efficient directional multiresolution image representation,” IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.
- M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
- A. Foi, V. Katkovnik, and K. Egiazarian, “Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images,” IEEE Transactions on Image Processing, vol. 16, no. 5, pp. 1395–1411, 2007.
- C. Bao, J. Cai, and H. Ji, “Fast sparsity-based orthogonal dictionary learning for image restoration,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV '13), pp. 3384–3391, 2013.
- L. Zhang, W. Dong, D. Zhang, and G. Shi, “Two-stage image denoising by principal component analysis with local pixel grouping,” Pattern Recognition, vol. 43, no. 4, pp. 1531–1549, 2010.
- L. Yaroslavsky, Digital Picture Processing: An Introduction, Springer, Berlin, Germany, 1985.
- X. Zhang, G. Hou, J. Ma et al., “Denoising MR images using non-local means filter with combined patch and pixel similarity,” PLoS ONE, vol. 9, no. 6, pp. 1–12, 2014.
- J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman, “Non-local sparse models for image restoration,” in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 2272–2279, October 2009.
- K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007.
- H. Takeda, S. Farsiu, and P. Milanfar, “Kernel regression for image processing and reconstruction,” IEEE Transactions on Image Processing, vol. 16, no. 2, pp. 349–366, 2007.
- C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of the IEEE 6th International Conference on Computer Vision, pp. 839–846, Washington, DC, USA, January 1998.
- A. Buades, B. Coll, and J. M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 490–530, 2005.
- Y. Zhan, M. Ding, L. Wu, and X. Zhang, “Nonlocal means method using weight refining for despeckling of ultrasound images,” Signal Processing, vol. 103, pp. 201–213, 2014.
- H. Takeda, S. Farsiu, and P. Milanfar, “Deblurring using regularized locally adaptive kernel regression,” IEEE Transactions on Image Processing, vol. 17, no. 4, pp. 550–563, 2008.
- P. Yee and S. Haykin, “Pattern classification as an ill-posed, inverse problem: a regularization approach,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '93), vol. 1, pp. 597–600, April 1993.
- M. P. Wand and M. C. Jones, Kernel Smoothing, ser. Monographs on Statistics and Applied Probability, Chapman and Hall, New York, NY, USA, 1995.
- R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight, “Sparsity and smoothness via the fused lasso,” Journal of the Royal Statistical Society B: Statistical Methodology, vol. 67, no. 1, pp. 91–108, 2005.
- J. Friedman, T. Hastie, H. Hofling, and R. Tibshirani, “Pathwise coordinate optimization,” The Annals of Applied Statistics, vol. 1, no. 2, pp. 302–332, 2007.
- H. Gao and H. Zhao, “Multilevel bioluminescence tomography based on radiative transfer equation part 1: l1 regularization,” Optics Express, vol. 18, no. 3, pp. 1854–1871, 2010.
- S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, “An iterative regularization method for total variation-based image restoration,” Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 460–489, 2005.
- S. Osher, Y. Mao, B. Dong, and W. Yin, “Fast linearized Bregman iteration for compressive sensing and sparse denoising,” Communications in Mathematical Sciences, vol. 8, no. 1, pp. 93–111, 2010.
- T. Goldstein and S. Osher, “The split Bregman method for L1-regularized problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 323–343, 2009.
- A. Li, Y. Li, X. Yang, and Y. Liu, “Image restoration with dual-prior constraint models based on Split Bregman,” Optical Review, vol. 20, no. 6, pp. 491–495, 2013.
- E. Esser, “Applications of lagrangian-based alternating direction methods and connections to split Bregman,” CAM Report, UCLA, 2009.
- J. Eckstein and D. P. Bertsekas, “On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators,” Mathematical Programming, vol. 55, no. 1, pp. 293–318, 1992.
- A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” IEEE Transactions on Image Processing, vol. 18, no. 11, pp. 2419–2434, 2009.
Copyright © 2014 Ao Li and Hayaru Shouno. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.