Research Article  Open Access
An Adaptive Total Generalized Variation Model with Augmented Lagrangian Method for Image Denoising
Abstract
We propose an adaptive total generalized variation (TGV) based model, aiming at a balance between edge preservation and region smoothness in image denoising. Variable splitting (VS) and the classical augmented Lagrangian method (ALM) are used to solve the proposed model. With the proposed adaptive model and ALM, the regularization parameter, which balances the data fidelity and the regularizer, is refreshed by a closed form in each iteration, so that image denoising can be accomplished without manual intervention. Numerical results indicate that our method is effective in suppressing the staircasing effect and holds superiority over some other state-of-the-art methods in both quantitative and qualitative assessment.
1. Introduction
In the past few decades, many variational or partial differential equation (PDE) based restoration models [1–7] have been proposed to recover images from degraded observations, owing to their ability to preserve significant image features such as edges and textures. Among these models, the total variation (TV) model, also named the Rudin-Osher-Fatemi (ROF) model [1], is distinguished for its excellent edge-preserving ability and has become one of the most widely used regularizers in image restoration [1, 2, 8–10]. In particular, the TV denoising problem takes the following form:

$$\min_{u} \int_{\Omega} |\nabla u| \, dx + \frac{\lambda}{2} \int_{\Omega} (u - f)^2 \, dx, \tag{1}$$

where $\Omega \subset \mathbb{R}^2$ is an open bounded domain, $u$ is the image to be restored, $f$ is the observation containing Gaussian white noise, and $\lambda > 0$ is the regularization parameter which balances the regularization term and the data fidelity term. $\int_{\Omega} |\nabla u|$ is the TV seminorm of the bounded variation (BV) space $\mathrm{BV}(\Omega)$. The TV model is highly effective in preserving edges and corners compared with the quadratic Tikhonov model. However, the TV model is proved to be optimal only when the original image is piecewise constant. In fact, since most natural images are not piecewise constant, the staircasing effect usually appears. This effect is unacceptable to human vision because it introduces artificial edges that do not exist in the original image.
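For concreteness, the discrete isotropic TV seminorm that underlies this model can be evaluated as follows. This is a minimal sketch with forward differences; the function name and the replicate handling of the last row/column are our choices, not the paper's.

```python
import numpy as np

def tv_seminorm(u):
    """Isotropic discrete TV: sum over pixels of the Euclidean norm of the
    forward-difference gradient (last row/column replicated, so the boundary
    differences vanish)."""
    dx = np.diff(u, axis=1, append=u[:, -1:])  # horizontal forward differences
    dy = np.diff(u, axis=0, append=u[-1:, :])  # vertical forward differences
    return np.sqrt(dx**2 + dy**2).sum()

# A piecewise-constant step image: TV equals jump height times edge length.
u = np.zeros((4, 4)); u[:, 2:] = 3.0
print(tv_seminorm(u))  # 4 rows, each with one jump of height 3 -> 12.0
```

On a piecewise-constant image the seminorm reduces to the total jump across edges, which is why TV preserves sharp discontinuities so well.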
To overcome this drawback of the TV model, researchers have suggested introducing higher-order derivatives of the image function [3–7, 12–16]. In order to eliminate the staircasing effect of the TV model, Chambolle and Lions [14] proposed the following infimal-convolution minimization functional:

$$\min_{u_1 + u_2 = u} \int_{\Omega} |\nabla u_1| \, dx + \alpha \int_{\Omega} |\nabla^2 u_2| \, dx + \frac{\lambda}{2} \int_{\Omega} (u - f)^2 \, dx, \tag{2}$$

where the discontinuous components of the image are allotted to $u_1$ while regions of moderate slope are assigned to $u_2$. The above model proved practically efficient. Later, a modified form of (2) was proposed in [5], with a regularizer of the form

$$\int_{\Omega} |\nabla u_1| \, dx + \alpha \int_{\Omega} |\Delta u_2| \, dx. \tag{3}$$

That is, the second-order derivative in (2) is substituted by the Laplacian in (3). A similar use of the Laplacian operator can also be seen in some PDE-based methods [3].
Since the classical TV model cannot distinguish jumps from smooth transitions, Chan et al. [12] considered an additional penalization of the discontinuities in images. Precisely, they augment the TV regularization term with a higher-order penalty weighted by a real-valued function whose value approaches 0 as the gradient magnitude approaches infinity, so that the extra smoothing is switched off at genuine edges. The absence of the staircasing effect for this choice was verified in [15].
Bredies et al. [17] proposed the concept of total generalized variation (TGV), which is considered a generalization of TV. The TGV seminorm is defined as

$$\mathrm{TGV}_{\alpha}^{k}(u) = \sup \left\{ \int_{\Omega} u \, \mathrm{div}^{k} v \, dx \; : \; v \in C_c^{k}(\Omega, \mathrm{Sym}^{k}(\mathbb{R}^{d})), \ \| \mathrm{div}^{l} v \|_{\infty} \le \alpha_{l}, \ l = 0, \dots, k-1 \right\}, \tag{4}$$

where $d$ denotes the image dimension and, throughout this paper, we assume $d = 2$; $\mathrm{Sym}^{k}(\mathbb{R}^{d})$ is the space of symmetric $k$-tensors on $\mathbb{R}^{d}$; $C_c^{k}(\Omega, \mathrm{Sym}^{k}(\mathbb{R}^{d}))$ is the space of compactly supported symmetric tensor fields; $\alpha = (\alpha_0, \dots, \alpha_{k-1})$ is a vector of fixed positive parameters. From the definition of $\mathrm{TGV}_{\alpha}^{k}$, we learn that it involves the derivatives of $u$ of order one to $k$. When $k = 1$ and $\alpha_0 = 1$, $\mathrm{TGV}_{\alpha}^{k}$ degenerates to the classical TV. Thus TGV can be seen as a generalization of TV.
TGV involves and balances higher-order derivatives of $u$. Image reconstruction with TGV regularization usually yields results with piecewise polynomial intensities and sharp edges. Therefore, TGV can effectively suppress the staircasing effect. In [17], an accelerated first-order method of Nesterov [18] was used to solve the TGV-regularized denoising problem.
In this paper, we propose an adaptive second-order TGV-regularized model for denoising and derive an augmented Lagrangian approach to handle it. Our denoising model is as follows:

$$\min_{u} \mathrm{TGV}_{\alpha}^{2}(u) \quad \text{s.t.} \quad \|u - f\|_{2}^{2} \le c. \tag{6}$$

According to standard Lagrange duality, for a given $c$, there exists a nonnegative $\lambda$ such that the unconstrained problem

$$\min_{u} \mathrm{TGV}_{\alpha}^{2}(u) + \frac{\lambda}{2} \|u - f\|_{2}^{2} \tag{7}$$

is equivalent to (6). However, with (6), we can estimate the regularization parameter $\lambda$ automatically. We first utilize an indicator function of the feasible set to transform problem (6) into an unconstrained one; then the variable splitting technique is applied to transform the resulting unconstrained problem into one with linear penalizing constraints; finally, the obtained constrained problem is solved by the alternating direction method of multipliers (ADMM) [19–22], which is an instance of the classical ALM. Thanks to the second-order TGV regularizer, the resulting image denoising algorithm suppresses the staircasing effect more effectively than TV-based denoising methods. Besides, it achieves adaptive estimation of the regularization parameter without an inner iterative scheme. It is worth noting that the idea of this paper can be extended to TGV models of order higher than two. However, for simplicity, we only treat the second-order model, which is adequate for a large class of natural images.
Our method differs from previous works in at least two respects. On one hand, compared with [16], which adopted the accelerated first-order method of Nesterov [18] to handle the unconstrained TGV-based denoising problem (7), we apply ALM to the constrained TGV-based denoising problem (6) and achieve automatic estimation of the regularization parameter $\lambda$. Our strategy avoids the extra cost of selecting $\lambda$ manually by trial and error. On the other hand, compared with the existing TV-based adaptive methods [10, 23, 24], we propose a more sophisticated adaptive method based on TGV, which tends to achieve more attractive results than the TV-based methods.
The rest of the paper is organized as follows. Section 2 describes the adaptive second-order TGV-based model for image denoising; based on Lagrange duality, an equivalent form of $\mathrm{TGV}_{\alpha}^{2}$ is suggested. The derivation of the proposed method is presented in Section 3. Section 4 gives numerical results that demonstrate the effectiveness of the proposed method. Finally, Section 5 ends the paper with a brief conclusion.
2. Adaptive Second-Order TGV-Based Model for Image Denoising
The space of bounded generalized variation (BGV) functions of order $k$ with weight $\alpha$ is defined as

$$\mathrm{BGV}_{\alpha}^{k}(\Omega) = \left\{ u \in L^{1}(\Omega) : \mathrm{TGV}_{\alpha}^{k}(u) < \infty \right\}. \tag{8}$$

Correspondingly, the BGV norm is defined as

$$\|u\|_{\mathrm{BGV}_{\alpha}^{k}} = \|u\|_{1} + \mathrm{TGV}_{\alpha}^{k}(u). \tag{9}$$

The TGV seminorm, rather than the BGV norm, is usually used as a regularizer.
In this paper, we only consider $k = 2$ for simplicity. The second-order TGV can be written as

$$\mathrm{TGV}_{\alpha}^{2}(u) = \sup \left\{ \int_{\Omega} u \, \mathrm{div}^{2} v \, dx : v \in C_c^{2}(\Omega, \mathrm{Sym}^{2}(\mathbb{R}^{2})), \ \|v\|_{\infty} \le \alpha_0, \ \|\mathrm{div}\, v\|_{\infty} \le \alpha_1 \right\}, \tag{10}$$

where the divergences are defined as

$$(\mathrm{div}\, v)_i = \sum_{j} \partial_j v_{ij}, \qquad \mathrm{div}^{2} v = \sum_{i,j} \partial_i \partial_j v_{ij}. \tag{11}$$

In fact, $\mathrm{Sym}^{2}(\mathbb{R}^{2})$ is equivalent to the space of all symmetric $2 \times 2$ matrices. The infinity norms in (10) are given by

$$\|v\|_{\infty} = \sup_{x \in \Omega} \Big( \sum_{i,j} |v_{ij}(x)|^{2} \Big)^{1/2}, \qquad \|\mathrm{div}\, v\|_{\infty} = \sup_{x \in \Omega} \Big( \sum_{i} |(\mathrm{div}\, v)_i(x)|^{2} \Big)^{1/2}. \tag{12}$$
For convenience in deriving our algorithm, we work with the discrete form in what follows, and tensors and vectors are denoted in bold type. In order to make use of ADMM, we apply an equivalent definition of $\mathrm{TGV}_{\alpha}^{2}$ [17, 22] based on Lagrange duality. With this definition, we have

$$\mathrm{TGV}_{\alpha}^{2}(\mathbf{u}) = \min_{\mathbf{p}} \; \alpha_1 \|\nabla \mathbf{u} - \mathbf{p}\|_{1} + \alpha_0 \|\mathcal{E}(\mathbf{p})\|_{1}, \tag{13}$$

where $\mathbf{u}$ denotes an image, $\mathbf{p}$ belongs to the two-dimensional 1-tensor (vector) field, and $\mathcal{E}$ denotes the symmetrized derivative operator. Suppose that $p^{1}$ and $p^{2}$ denote the components of $\mathbf{p}$. Then we have

$$\nabla \mathbf{u} = (D_x \mathbf{u}, D_y \mathbf{u}), \qquad \mathcal{E}(\mathbf{p}) = \begin{pmatrix} D_x p^{1} & \tfrac{1}{2}(D_y p^{1} + D_x p^{2}) \\ \tfrac{1}{2}(D_y p^{1} + D_x p^{2}) & D_y p^{2} \end{pmatrix}, \tag{14}$$

where $D_x$ and $D_y$ denote the difference operators in the $x$ and $y$ directions. According to the definition of the operators $\nabla$ and $\mathcal{E}$, $\nabla \mathbf{u}$ and $\mathcal{E}(\mathbf{p})$ are a two-dimensional 1-tensor and a symmetric 2-tensor, respectively. Besides, the 1-norms of $\nabla \mathbf{u} - \mathbf{p}$ and $\mathcal{E}(\mathbf{p})$ are defined as the sums, over all pixels, of the pointwise Euclidean and Frobenius norms, respectively. The deduction of (13) is given in the Appendix.
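The discrete gradient and the symmetrized derivative can be implemented in a few lines. This is a sketch assuming forward differences under periodic (circulant) boundaries, consistent with the FFT-based solves used later; all function names are ours.

```python
import numpy as np

def Dx(a):  # forward difference in x (columns), periodic boundary
    return np.roll(a, -1, axis=1) - a

def Dy(a):  # forward difference in y (rows), periodic boundary
    return np.roll(a, -1, axis=0) - a

def grad(u):
    """Discrete gradient: maps an image to a two-dimensional 1-tensor field."""
    return np.stack([Dx(u), Dy(u)])

def sym_grad(p):
    """Symmetrized derivative E(p) of a vector field p = (p1, p2).
    Returns the three distinct components (E11, E22, E12), with E21 = E12."""
    p1, p2 = p
    return np.stack([Dx(p1), Dy(p2), 0.5 * (Dy(p1) + Dx(p2))])
```

Storing only the three distinct entries of the symmetric 2-tensor is what makes the later "4-dimensional" shrinkage cheap: the off-diagonal entry is simply counted twice in the pointwise norm.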
Then the constrained second-order TGV-regularized denoising problem (6) can be rewritten as

$$\min_{\mathbf{u}, \mathbf{p}} \; \alpha_1 \|\nabla \mathbf{u} - \mathbf{p}\|_{1} + \alpha_0 \|\mathcal{E}(\mathbf{p})\|_{1} \quad \text{s.t.} \quad \|\mathbf{u} - \mathbf{f}\|_{2}^{2} \le c. \tag{15}$$
3. Methodology
3.1. The Augmented Lagrangian Model of Adaptive TGV-Based Denoising
Problem (15) can be transformed into an unconstrained problem with the following discontinuous objective functional:

$$\min_{\mathbf{u}, \mathbf{p}} \; \alpha_1 \|\nabla \mathbf{u} - \mathbf{p}\|_{1} + \alpha_0 \|\mathcal{E}(\mathbf{p})\|_{1} + \iota_{S}(\mathbf{u}), \tag{16}$$

where $\iota_{S}$ is the indicator function of the feasible set $S$, defined by

$$S = \left\{ \mathbf{u} : \|\mathbf{u} - \mathbf{f}\|_{2}^{2} \le c \right\}, \qquad \iota_{S}(\mathbf{u}) = \begin{cases} 0, & \mathbf{u} \in S, \\ +\infty, & \mathbf{u} \notin S. \end{cases} \tag{17}$$

Note that $S$ is a closed Euclidean ball centered at $\mathbf{f}$ with radius $\sqrt{c}$.
The solution of problem (16) is hampered by its nonlinearity and nondifferentiability. Using variable splitting, we introduce three auxiliary variables to simplify the solution of (16): a variable $\mathbf{x}$ to liberate $\mathbf{u}$ from the feasible set constraint, and variables $\mathbf{w}$ and $\mathbf{z}$ to liberate $\nabla \mathbf{u} - \mathbf{p}$ and $\mathcal{E}(\mathbf{p})$ from the nondifferentiable 1-norms, respectively. Then problem (16) can be transformed into the following equivalent constrained problem:

$$\min_{\mathbf{u}, \mathbf{p}, \mathbf{w}, \mathbf{z}, \mathbf{x}} \; \alpha_1 \|\mathbf{w}\|_{1} + \alpha_0 \|\mathbf{z}\|_{1} + \iota_{S}(\mathbf{x}) \quad \text{s.t.} \quad \mathbf{w} = \nabla \mathbf{u} - \mathbf{p}, \; \mathbf{z} = \mathcal{E}(\mathbf{p}), \; \mathbf{x} = \mathbf{u}. \tag{18}$$
The auxiliary variable $\mathbf{x}$ is introduced precisely to liberate $\mathbf{u}$ from the feasible set constraint; a similar operation can be found in [10]. Without it, we would have to resort to an inner iterative scheme to update the regularization parameter.
The corresponding augmented Lagrangian functional of (18) is defined as

$$\begin{aligned} \mathcal{L}(\mathbf{u}, \mathbf{p}, \mathbf{w}, \mathbf{z}, \mathbf{x}; \boldsymbol{\mu}_1, \boldsymbol{\mu}_2, \boldsymbol{\mu}_3) = {} & \alpha_1 \|\mathbf{w}\|_{1} + \langle \boldsymbol{\mu}_1, \nabla \mathbf{u} - \mathbf{p} - \mathbf{w} \rangle + \frac{\rho_1}{2} \|\nabla \mathbf{u} - \mathbf{p} - \mathbf{w}\|_{2}^{2} \\ & + \alpha_0 \|\mathbf{z}\|_{1} + \langle \boldsymbol{\mu}_2, \mathcal{E}(\mathbf{p}) - \mathbf{z} \rangle + \frac{\rho_2}{2} \|\mathcal{E}(\mathbf{p}) - \mathbf{z}\|_{2}^{2} \\ & + \iota_{S}(\mathbf{x}) + \langle \boldsymbol{\mu}_3, \mathbf{u} - \mathbf{x} \rangle + \frac{\rho_3}{2} \|\mathbf{u} - \mathbf{x}\|_{2}^{2}, \end{aligned} \tag{19}$$

where $\boldsymbol{\mu}_1$, $\boldsymbol{\mu}_2$, and $\boldsymbol{\mu}_3$ are Lagrange multipliers and $\rho_1$, $\rho_2$, and $\rho_3$ are penalty parameters, which must be positive. According to the classical ADMM, we solve the following iterative scheme: in each iteration, $\mathbf{u}$, $\mathbf{p}$, $\mathbf{w}$, $\mathbf{z}$, and $\mathbf{x}$ are updated in turn by minimizing $\mathcal{L}$ with the other variables fixed, followed by the multiplier updates

$$\boldsymbol{\mu}_1 \leftarrow \boldsymbol{\mu}_1 + \rho_1 (\nabla \mathbf{u} - \mathbf{p} - \mathbf{w}), \quad \boldsymbol{\mu}_2 \leftarrow \boldsymbol{\mu}_2 + \rho_2 (\mathcal{E}(\mathbf{p}) - \mathbf{z}), \quad \boldsymbol{\mu}_3 \leftarrow \boldsymbol{\mu}_3 + \rho_3 (\mathbf{u} - \mathbf{x}). \tag{20}$$
3.2. Solution of the Subproblems
With the auxiliary $\mathbf{x}$, the $\mathbf{u}$-subproblem becomes quadratic and independent of the feasible set constraint. It amounts to the following objective:

$$\min_{\mathbf{u}} \; \frac{\rho_1}{2} \left\| \nabla \mathbf{u} - \mathbf{p} - \mathbf{w} + \frac{\boldsymbol{\mu}_1}{\rho_1} \right\|_{2}^{2} + \frac{\rho_3}{2} \left\| \mathbf{u} - \mathbf{x} + \frac{\boldsymbol{\mu}_3}{\rho_3} \right\|_{2}^{2}. \tag{21}$$

The minimization problem (21) can be solved via the normal equation

$$\left( \rho_1 \nabla^{T} \nabla + \rho_3 \mathbf{I} \right) \mathbf{u} = \nabla^{T} \left( \rho_1 (\mathbf{p} + \mathbf{w}) - \boldsymbol{\mu}_1 \right) + \rho_3 \mathbf{x} - \boldsymbol{\mu}_3. \tag{22}$$

With the circulant boundary condition on images, we can solve (22) with several FFTs and IFFTs [8, 10].
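A minimal sketch of how a system of the form (22) is solved with FFTs: under periodic boundaries, $\nabla^{T}\nabla$ (the negative discrete Laplacian) is diagonalized by the 2D DFT, so the solve is one forward and one inverse transform. The function name and the forward-difference convention are our assumptions.

```python
import numpy as np

def solve_u_subproblem(rhs, rho1, rho3):
    """Solve (rho1 * grad^T grad + rho3 * I) u = rhs under periodic boundaries.
    grad^T grad is diagonalized by the 2D FFT; its eigenvalues per axis are
    |exp(2*pi*i*k/n) - 1|^2 = 2 - 2*cos(2*pi*k/n) for forward differences."""
    ny, nx = rhs.shape
    ky = 2 - 2 * np.cos(2 * np.pi * np.arange(ny) / ny)  # eigenvalues of Dy^T Dy
    kx = 2 - 2 * np.cos(2 * np.pi * np.arange(nx) / nx)  # eigenvalues of Dx^T Dx
    denom = rho1 * (ky[:, None] + kx[None, :]) + rho3    # always >= rho3 > 0
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

Because $\rho_3 > 0$, the denominator never vanishes, so no regularization of the zero frequency is needed.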
In the same way, the subproblem with respect to $\mathbf{p}$ is also quadratic, with the objective functional

$$\min_{\mathbf{p}} \; \frac{\rho_1}{2} \left\| \nabla \mathbf{u} - \mathbf{p} - \mathbf{w} + \frac{\boldsymbol{\mu}_1}{\rho_1} \right\|_{2}^{2} + \frac{\rho_2}{2} \left\| \mathcal{E}(\mathbf{p}) - \mathbf{z} + \frac{\boldsymbol{\mu}_2}{\rho_2} \right\|_{2}^{2}. \tag{23}$$

Then, for $p^{1}$, we have

$$\left( \rho_1 \mathbf{I} + \rho_2 D_x^{T} D_x + \frac{\rho_2}{2} D_y^{T} D_y \right) p^{1} + \frac{\rho_2}{2} D_y^{T} D_x \, p^{2} = \mathbf{r}_1, \tag{24}$$

and for $p^{2}$, we have

$$\left( \rho_1 \mathbf{I} + \rho_2 D_y^{T} D_y + \frac{\rho_2}{2} D_x^{T} D_x \right) p^{2} + \frac{\rho_2}{2} D_x^{T} D_y \, p^{1} = \mathbf{r}_2, \tag{25}$$

where $\mathbf{r}_1$ and $\mathbf{r}_2$ are combinations of $\mathbf{u}$, $\mathbf{w}$, $\mathbf{z}$, $\boldsymbol{\mu}_1$, and $\boldsymbol{\mu}_2$. Similar to the solution of (22), the coupled problems (24) and (25) can also be solved conveniently through several FFTs and IFFTs under the assumption of the circulant boundary condition.
The subproblem for $\mathbf{z}$ can be written as

$$\min_{\mathbf{z}} \; \alpha_0 \|\mathbf{z}\|_{1} + \frac{\rho_2}{2} \left\| \mathbf{z} - \mathcal{E}(\mathbf{p}) - \frac{\boldsymbol{\mu}_2}{\rho_2} \right\|_{2}^{2}. \tag{26}$$

Problem (26) can be solved componentwise through the following 4-dimensional shrinkage operation:

$$\mathbf{z}_{i,j} = \max \left\{ \|\mathbf{t}_{i,j}\| - \frac{\alpha_0}{\rho_2}, \, 0 \right\} \frac{\mathbf{t}_{i,j}}{\|\mathbf{t}_{i,j}\|}, \qquad \mathbf{t} = \mathcal{E}(\mathbf{p}) + \frac{\boldsymbol{\mu}_2}{\rho_2}, \tag{27}$$

with the convention $0 \cdot (\mathbf{0}/0) = \mathbf{0}$, where $\|\cdot\|$ denotes the pointwise Frobenius norm of the symmetric 2-tensor.
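The 4-dimensional shrinkage can be implemented pointwise on the three stored components of the symmetric tensor, counting the off-diagonal entry twice in the norm (since $t_{21} = t_{12}$). A sketch with our naming:

```python
import numpy as np

def shrink_sym(t11, t22, t12, thresh):
    """Pointwise soft shrinkage of a symmetric 2-tensor field.
    The off-diagonal entry contributes twice to the Frobenius norm because
    t21 = t12, which is why the operation is effectively 4-dimensional."""
    norm = np.sqrt(t11**2 + t22**2 + 2 * t12**2)
    # max(norm - thresh, 0) / norm, with the 0/0 convention handled safely
    scale = np.maximum(norm - thresh, 0.0) / np.maximum(norm, 1e-12)
    return scale * t11, scale * t22, scale * t12
```

Entries whose pointwise norm falls below the threshold are set exactly to zero, which is what makes the $\mathbf{z}$-update sparsity-promoting.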
The $\mathbf{w}$-subproblem is given by

$$\min_{\mathbf{w}} \; \alpha_1 \|\mathbf{w}\|_{1} + \frac{\rho_1}{2} \left\| \mathbf{w} - \nabla \mathbf{u} + \mathbf{p} - \frac{\boldsymbol{\mu}_1}{\rho_1} \right\|_{2}^{2}, \tag{28}$$

and it can be solved componentwise through the following 2-dimensional shrinkage operation:

$$\mathbf{w}_{i,j} = \max \left\{ \|\mathbf{q}_{i,j}\| - \frac{\alpha_1}{\rho_1}, \, 0 \right\} \frac{\mathbf{q}_{i,j}}{\|\mathbf{q}_{i,j}\|}, \qquad \mathbf{q} = \nabla \mathbf{u} - \mathbf{p} + \frac{\boldsymbol{\mu}_1}{\rho_1}. \tag{29}$$
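The 2-dimensional shrinkage is the familiar vector soft-thresholding used in TV-type ADMM solvers; a sketch, with names of our choosing:

```python
import numpy as np

def shrink_vec(t1, t2, thresh):
    """Pointwise 2D soft shrinkage of a vector field (t1, t2):
    shrink the pointwise Euclidean norm by `thresh`, keep the direction."""
    norm = np.sqrt(t1**2 + t2**2)
    scale = np.maximum(norm - thresh, 0.0) / np.maximum(norm, 1e-12)  # avoid 0/0
    return scale * t1, scale * t2
```

For example, a vector of norm 5 shrunk by 1 keeps its direction and has norm 4.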
The subproblem with respect to $\mathbf{x}$ can be written as

$$\min_{\mathbf{x}} \; \iota_{S}(\mathbf{x}) + \frac{\rho_3}{2} \left\| \mathbf{x} - \mathbf{u} - \frac{\boldsymbol{\mu}_3}{\rho_3} \right\|_{2}^{2}. \tag{30}$$

Consequently, writing $\mathbf{s} = \mathbf{u} + \boldsymbol{\mu}_3 / \rho_3$, the solution of problem (30) is

$$\mathbf{x} = \frac{\rho_3 \mathbf{s} + \lambda \mathbf{f}}{\rho_3 + \lambda}. \tag{31}$$

The determination of $\lambda$ falls into two cases according to the range of $\mathbf{s}$. On one hand, if

$$\|\mathbf{s} - \mathbf{f}\|_{2}^{2} \le c, \tag{32}$$

we can set $\lambda = 0$, and, obviously, $\mathbf{x} = \mathbf{s}$ satisfies the feasible set constraint. On the other hand, if (32) is not true, $\lambda$ should fulfill the following equation:

$$\|\mathbf{x} - \mathbf{f}\|_{2}^{2} = c. \tag{33}$$

Substituting (31) into (33), we get

$$\lambda = \rho_3 \left( \frac{\|\mathbf{s} - \mathbf{f}\|_{2}}{\sqrt{c}} - 1 \right). \tag{34}$$

The resulting image denoising algorithm is summarized in Algorithm 1 (TGV²-ID-ADMM).
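The $\mathbf{x}$-update and the closed-form estimate of $\lambda$ described in (31)–(34) amount to a projection onto the ball $S$. A sketch (our variable names):

```python
import numpy as np

def update_x_and_lambda(u, mu3, rho3, f, c):
    """Closed-form x-update: project s = u + mu3/rho3 onto {x : ||x-f||^2 <= c}.
    If s is already feasible, lambda = 0; otherwise lambda is chosen so that
    the constraint is active, mirroring equations (31)-(34)."""
    s = u + mu3 / rho3
    r = np.linalg.norm(s - f)
    if r**2 <= c:
        return s, 0.0                              # feasible: no regularization force
    lam = rho3 * (r / np.sqrt(c) - 1.0)            # from ||x - f||_2^2 = c
    x = (rho3 * s + lam * f) / (rho3 + lam)        # stationarity condition (31)
    return x, lam
```

Note that $\mathbf{x} - \mathbf{f} = \rho_3 (\mathbf{s} - \mathbf{f}) / (\rho_3 + \lambda)$, so the returned $\mathbf{x}$ lies exactly on the sphere of radius $\sqrt{c}$ in the active case.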

The adoption of the variable $\mathbf{x}$ is essential for the adaptive estimation of the regularization parameter $\lambda$. With the assistance of $\mathbf{x}$, $\mathbf{u}$ is liberated from the feasible set constraint. Thus, the update of $\lambda$ is decoupled from the update of $\mathbf{u}$, and a closed form for updating $\lambda$ is obtained in each step without inner iteration. From functionals (16) and (18) we learn that, by fixing $\mathbf{p} = \mathbf{0}$ (together with the associated split variable $\mathbf{z}$ and multiplier $\boldsymbol{\mu}_2$), Algorithm TGV²-ID-ADMM degenerates to a TV-based denoising algorithm; we denote this case as TGV¹-ID-ADMM.
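To make the overall iteration concrete, here is a compact NumPy sketch of the TV special case just mentioned (TGV¹-ID-ADMM): the second-order variables are fixed at zero, leaving the $\mathbf{u}$, $\mathbf{w}$, $\mathbf{x}$ subproblems and the multiplier updates. All names, default parameter values, and the periodic-boundary convention are ours, not the paper's.

```python
import numpy as np

def grad(u):
    return np.stack([np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u])

def div(p):  # negative adjoint of grad: <grad u, p> = <u, -div p>
    return (p[0] - np.roll(p[0], 1, axis=1)) + (p[1] - np.roll(p[1], 1, axis=0))

def tv_denoise_admm(f, c, rho1=1.0, rho3=1.0, n_iter=150):
    """TV special case (TGV^1-ID-ADMM in the text): minimize ||grad u||_1
    subject to ||u - f||_2^2 <= c, with periodic boundaries."""
    ny, nx = f.shape
    ky = 2 - 2 * np.cos(2 * np.pi * np.arange(ny) / ny)
    kx = 2 - 2 * np.cos(2 * np.pi * np.arange(nx) / nx)
    denom = rho1 * (ky[:, None] + kx[None, :]) + rho3
    u, x = f.copy(), f.copy()
    w = np.zeros((2, ny, nx)); mu1 = np.zeros_like(w); mu3 = np.zeros_like(f)
    lam = 0.0
    for _ in range(n_iter):
        # u-subproblem: (rho1 * grad^T grad + rho3 * I) u = rhs, solved by FFT
        rhs = -div(rho1 * w - mu1) + rho3 * x - mu3
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        # w-subproblem: 2D shrinkage (alpha_1 = 1 here)
        t = grad(u) + mu1 / rho1
        norm = np.sqrt(t[0]**2 + t[1]**2)
        w = np.maximum(norm - 1.0 / rho1, 0.0) / np.maximum(norm, 1e-12) * t
        # x-subproblem: projection onto the ball, with closed-form lambda
        s = u + mu3 / rho3
        r = np.linalg.norm(s - f)
        if r**2 <= c:
            x, lam = s, 0.0
        else:
            lam = rho3 * (r / np.sqrt(c) - 1.0)
            x = (rho3 * s + lam * f) / (rho3 + lam)
        # multiplier updates
        mu1 += rho1 * (grad(u) - w)
        mu3 += rho3 * (u - x)
    return u, lam
```

With the discrepancy radius set to $c = N\sigma^2$ (the expected squared noise norm), the loop both denoises and produces the adaptive $\lambda$ as a byproduct, which is the essence of the automatic parameter estimation.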
The convergence of Algorithm TGV²-ID-ADMM follows from the convergence analysis of the TV-based ADMM in [11, 25], owing to the convexity of the objective. We do not repeat the lengthy analysis here; however, we state the following essential convergence theorem for the proposed method.
Theorem 1. For fixed positive penalty parameters $\rho_1$, $\rho_2$, and $\rho_3$, the sequence $\{(\mathbf{u}^{k}, \mathbf{p}^{k}, \lambda^{k})\}$ generated by Algorithm TGV²-ID-ADMM from any initial point converges to $(\mathbf{u}^{*}, \mathbf{p}^{*}, \lambda^{*})$, where $(\mathbf{u}^{*}, \mathbf{p}^{*})$ is the solution of problem (15) and $\lambda^{*}$ is the regularization parameter corresponding to the feasible set constraint $\|\mathbf{u} - \mathbf{f}\|_{2}^{2} \le c$.
4. Experimental Results
In this section, we illustrate the effectiveness of the proposed algorithm in suppressing the staircasing effect and removing Gaussian noise from images. Besides, we also show the robustness of the results with respect to the penalty parameters. We implemented our algorithm in MATLAB v7.8.0 and ran it under Windows 7 on a PC with an Intel Core (TM) i5 CPU at 3.20 GHz and 8 GB of RAM.
The root mean squared error (RMSE) and the peak signal-to-noise ratio (PSNR) used in the comparison are defined as

$$\mathrm{RMSE} = \sqrt{\frac{\|\hat{\mathbf{u}} - \mathbf{u}_0\|_{2}^{2}}{N}}, \qquad \mathrm{PSNR} = 20 \log_{10} \frac{255}{\mathrm{RMSE}}, \tag{35}$$

where $\mathbf{u}_0$ is the original noise-free image, $\hat{\mathbf{u}}$ is the restored image, and $N$ is the number of pixels. Besides, in Sections 4.1 and 4.2, we fix the penalty parameters of TGV²-ID-ADMM (and of TGV¹-ID-ADMM) at values, depending on the BSNR, that achieve consistently promising results with fast speed, where BSNR is the blurred signal-to-noise ratio defined by $\mathrm{BSNR} = 10 \log_{10} (\mathrm{var}(\mathbf{f}) / \sigma^{2})$ ($\mathrm{var}(\mathbf{f})$ denotes the variance of the observation $\mathbf{f}$).
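The two quality metrics can be computed as follows; the `peak = 255` default assumes 8-bit intensities, as in (35).

```python
import numpy as np

def rmse(u, ref):
    """Root mean squared error between a restored image and the reference."""
    return np.sqrt(np.mean((u - ref) ** 2))

def psnr(u, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB; peak = 255 assumes an 8-bit range."""
    return 20 * np.log10(peak / rmse(u, ref))
```

Note that halving the RMSE raises the PSNR by about 6 dB, so small PSNR gaps in the tables correspond to modest but real error reductions.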
4.1. Staircasing Effect Reduction by the Proposed Method
We first compare Algorithm TGV²-ID-ADMM with Algorithm TGV¹-ID-ADMM to illustrate the effectiveness of the second-order model in staircasing effect reduction. We use $\|\mathbf{u}^{k} - \mathbf{u}^{k-1}\|_{2} / \|\mathbf{u}^{k}\|_{2} \le \varepsilon$ as the stopping criterion for both algorithms, where $\mathbf{u}^{k}$ denotes the restored result in the $k$-th iteration; the tolerance $\varepsilon$ is set separately for the second-order and the first-order cases.
In this experiment, we use the synthetic piecewise affine image shown in Figure 1 as the test image. The original image is first contaminated by Gaussian noise of standard deviation $\sigma$. Then we apply TGV²-ID-ADMM and TGV¹-ID-ADMM to remove the noise. Table 1 reports the results in terms of RMSE, PSNR, total iterations, and CPU time. The ground truth, the noisy image, and the images restored by the two algorithms are displayed in Figure 1. Furthermore, for better visualization, we additionally provide three-dimensional close-ups of the marked regions of the two restored images in Figure 1. From Table 1 we observe that TGV²-ID-ADMM outperforms TGV¹-ID-ADMM in terms of both RMSE and PSNR. Figure 1 shows that the image denoised by TGV²-ID-ADMM contains almost no artificial edges in affine regions. In contrast, the result restored by TGV¹-ID-ADMM contains obvious staircasing in affine regions; the three-dimensional close-ups vividly demonstrate this phenomenon. This illustrates that our TGV-based algorithm is effective in staircasing effect reduction.

Table 1 also shows that, to accomplish the denoising task, TGV²-ID-ADMM usually costs more CPU time than TGV¹-ID-ADMM, since the second-order model involves considerably more computation. However, the cost is worthwhile given the impressive improvement in both quantitative and qualitative restoration quality. Figure 2 displays the evolution of $\lambda$ and of the PSNR for the two algorithms. We learn that the regularization parameters of both algorithms eventually converge, which guarantees their automatic operation.
4.2. Comparison in Accuracy
In this subsection, we compare TGV²-ID-ADMM with two other well-known adaptive TV-based denoising algorithms: Chambolle's projection algorithm [23] and the split Bregman algorithm [24], both of which have public online implementations at http://www.ipol.im/. Two natural images, Lena and Peppers, both of size 512 × 512 and shown in Figure 3, are used for comparison. The parameter setting for TGV²-ID-ADMM is the same as in the previous subsection. We obtained the results of the two competitors through their online demos.
We add Gaussian noise of standard deviation 20, 30, and 40 to Lena and Peppers to obtain the noisy observations, respectively. Then we apply the three algorithms to restore the noisy images. Table 2 reports the comparison in terms of RMSE and PSNR, with the best result for each comparison item highlighted in bold. Table 2 shows that TGV²-ID-ADMM holds superiority in both RMSE and PSNR for all tested cases. Figure 4 displays the noisy Lena and the restorations by the three algorithms, whereas Figure 5 exhibits the noisy Peppers and the corresponding restorations. Figures 4 and 5 demonstrate that TGV²-ID-ADMM obtains results with better visual quality and efficiently suppresses the staircasing effect. In contrast, both the TV-based Chambolle projection algorithm and the TV-based split Bregman algorithm produce results with obvious staircasing. Since the test images carry different levels of noise, the robustness of our algorithm with respect to the noise level is verified to a certain extent.

4.3. Solution Robustness with Respect to the Penalty Parameters
Although positivity of the penalty parameters is sufficient for the convergence of ADMM, in practice the results of ADMM are influenced to a certain extent by the choice of these parameters. As suggested by a referee, we add an experiment to show the robustness of the results of TGV²-ID-ADMM with respect to the penalty parameters, on the two denoising problems mentioned above, namely the Lena and Peppers problems. We keep two of the penalty parameters fixed and vary the remaining one from 0.01 to 1 with a step size of 0.01. In Figure 6, we plot PSNR versus the varied penalty parameter for the denoised Lena and Peppers. Figure 6 demonstrates that the optimal value is concentrated in a narrow interval whose location is robust to the variation of image and noise level. The results of our method possess sufficient robustness with respect to the penalty parameters, since the absolute gap between the maximum and the minimum PSNR is less than 0.18 dB in this experiment, a gap that introduces no visible difference in quality. The setting used in the former two experiments is thus approximately optimal for the proposed algorithm.
[Figure 6: PSNR versus the varied penalty parameter for (a) the Lena image and (b) the Peppers image, under the respective Gaussian noise levels.]
5. Concluding Remarks
We propose an adaptive TGV-based model for noise removal in this paper. Variable splitting (VS) and the classical augmented Lagrangian method are used to handle the proposed model. From the experimental results, we observe that the proposed algorithm is effective in suppressing the staircasing effect and preserving edges in images, and it is superior to some other well-known adaptive denoising methods in both quantitative and qualitative assessment. Besides, our work can be smoothly generalized to image deblurring problems.
Appendix
The Equivalent Definition of $\mathrm{TGV}_{\alpha}^{2}$
In the discrete version, we have

$$\mathrm{TGV}_{\alpha}^{2}(\mathbf{u}) = \max_{\mathbf{v}} \left\{ \langle \mathbf{u}, \mathrm{div}^{2} \mathbf{v} \rangle : \|\mathbf{v}\|_{\infty} \le \alpha_0, \ \|\mathrm{div}\, \mathbf{v}\|_{\infty} \le \alpha_1 \right\},$$

where $\mathbf{v}$ is a discrete symmetric 2-tensor field and the norms are the discrete counterparts of (12). Introducing a vector field $\mathbf{p}$ as the Lagrange multiplier for the constraint on $\mathrm{div}\, \mathbf{v}$ and exchanging the maximum and minimum, we obtain, according to Lagrange duality,

$$\mathrm{TGV}_{\alpha}^{2}(\mathbf{u}) = \min_{\mathbf{p}} \; \alpha_1 \|\nabla \mathbf{u} - \mathbf{p}\|_{1} + \alpha_0 \|\mathcal{E}(\mathbf{p})\|_{1},$$

which is exactly (13).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank Editor Fatih Yaman and anonymous referees for their valuable comments. Their help has greatly enhanced the quality of this paper. This work was partially supported by the National Natural Science Foundation of China under Grant nos. 61203189, 61104223, and 61374120 and the National Science Fund for Distinguished Young Scholars of China under Grant no. 61025014.
References
[1] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, vol. 60, no. 1–4, pp. 259–268, 1992.
[2] T. Chan, S. Esedoglu, F. Park, and A. Yip, "Recent developments in total variation image restoration," in Mathematical Models of Computer Vision, Springer, New York, NY, USA, 2005.
[3] Y. L. You and M. Kaveh, "Fourth-order partial differential equations for noise removal," IEEE Transactions on Image Processing, vol. 9, no. 10, pp. 1723–1730, 2000.
[4] M. Lysaker, A. Lundervold, and X.-C. Tai, "Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time," IEEE Transactions on Image Processing, vol. 12, no. 12, pp. 1579–1589, 2003.
[5] T. F. Chan, S. Esedoglu, and F. Park, "A fourth order dual method for staircase reduction in texture extraction and image restoration problems," in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP '10), pp. 4137–4140, Los Angeles, Calif, USA, September 2010.
[6] M. R. Hajiaboli, "An anisotropic fourth-order diffusion filter for image noise removal," International Journal of Computer Vision, vol. 92, no. 2, pp. 177–191, 2011.
[7] T. Liu and Z. Xiang, "Image restoration combining the second-order and fourth-order PDEs," Mathematical Problems in Engineering, vol. 2013, Article ID 743891, 7 pages, 2013.
[8] Y. Wang, J. Yang, W. Yin, and Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction," SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008.
[9] N. B. Brás, J. Bioucas-Dias, R. C. Martins, and A. C. Serra, "An alternating direction algorithm for total variation reconstruction of distributed parameters," IEEE Transactions on Image Processing, vol. 21, no. 6, pp. 3004–3016, 2012.
[10] C. He, C. Hu, W. Zhang, B. Shi, and X. Hu, "Fast total-variation image deconvolution with adaptive parameter estimation via split Bregman method," Mathematical Problems in Engineering, vol. 2014, Article ID 617026, 9 pages, 2014.
[11] T. Goldstein, B. O'Donoghue, S. Setzer, and R. Baraniuk, "Fast alternating direction optimization methods," UCLA CAM Report, 2012.
[12] T. Chan, A. Marquina, and P. Mulet, "High-order total variation-based image restoration," SIAM Journal on Scientific Computing, vol. 22, no. 2, pp. 503–516, 2001.
[13] O. Scherzer, "Denoising with higher order derivatives of bounded variation and an application to parameter estimation," Computing, vol. 60, no. 1, pp. 1–27, 1998.
[14] A. Chambolle and P.-L. Lions, "Image recovery via total variation minimization and related problems," Numerische Mathematik, vol. 76, no. 2, pp. 167–188, 1997.
[15] G. Dal Maso, I. Fonseca, G. Leoni, and M. Morini, "A higher order model for image restoration: the one-dimensional case," SIAM Journal on Mathematical Analysis, vol. 40, pp. 2351–2391, 2009.
[16] B. Shi, Z. F. Pang, and Y. F. Yang, "Image restoration based on the hybrid total-variation-type model," Abstract and Applied Analysis, vol. 2012, Article ID 376802, 30 pages, 2012.
[17] K. Bredies, K. Kunisch, and T. Pock, "Total generalized variation," SIAM Journal on Imaging Sciences, vol. 3, no. 3, pp. 492–526, 2010.
[18] Yu. Nesterov, "A method for solving a convex programming problem with convergence rate O(1/k²)," Soviet Mathematics Doklady, vol. 27, pp. 372–376, 1983.
[19] R. Glowinski and P. Le Tallec, Augmented Lagrangians and Operator-Splitting Methods in Nonlinear Mechanics, Studies in Applied Mathematics 9, SIAM, Philadelphia, Pa, USA, 1989.
[20] S. Xie and S. Rahardja, "Alternating direction method for balanced image restoration," IEEE Transactions on Image Processing, vol. 21, no. 11, pp. 4557–4567, 2012.
[21] W. Deng and W. Yin, "On the global and linear convergence of the generalized alternating direction method of multipliers," UCLA CAM Report 12-52, 2012.
[22] W. Guo, J. Qin, and W. Yin, "A new detail-preserving regularity scheme," UCLA CAM Report 13-04, 2013.
[23] J. Duran, B. Coll, and C. Sbert, "Chambolle's projection algorithm for total variation denoising," Image Processing on Line, vol. 3, pp. 311–331, 2013.
[24] P. Getreuer, "Rudin-Osher-Fatemi total variation denoising using split Bregman," Image Processing on Line, vol. 2, pp. 74–95, 2012.
[25] C. Wu and X.-C. Tai, "Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models," SIAM Journal on Imaging Sciences, vol. 3, no. 3, pp. 300–339, 2010.
Copyright
Copyright © 2014 Chuan He et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.