Abstract
Image restoration is one of the most fundamental issues in imaging science. Total variation regularization is widely used in image restoration problems for its capability to preserve edges. In this paper, we consider a constrained minimization problem with double total variation regularization terms. To solve this problem, we employ the split Bregman iteration method and Chambolle's algorithm. The convergence property of the algorithm is established. The numerical results demonstrate the effectiveness of the proposed method in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM).
1. Introduction
Image restoration is a fundamental problem in the literature of image processing. It plays an important role in various areas such as remote sensing, astronomy, medical imaging, and microscopy [1, 2]. Many image restoration tasks can be posed as linear inverse problems of the form g = Hf + η (1), where H is a blurring matrix constructed from a discretized point spread function (PSF), f is the original grayscale image, g is the degraded observation, and η is additive noise. We remark that the matrix H has special structure that can be exploited in computations when special boundary conditions, such as periodic or Dirichlet boundary conditions, are imposed [3, 4]. In this work, the PSF is assumed to be known. If the PSF is unknown, a variety of techniques are available for estimating it [5, 6].
Image restoration problems are frequently ill-conditioned; thus, the straightforward solution of (1) typically does not yield a meaningful approximation [7, 8]. To avoid this difficulty, one typical approach is to replace the linear system (1) by a nearby system that is less sensitive to the error in g and to consider the computed solution of the latter system. This replacement is known as regularization.
One of the most popular regularization approaches is Tikhonov regularization [9], which seeks to minimize a penalized least squares problem of the form min_f ‖Hf − g‖² + α‖Lf‖² (2), where the first term measures the data fidelity of the solution f, and the regularization term ‖Lf‖² restricts the smoothness of the solution. The positive regularization parameter α balances the tradeoff between fidelity and noise sensitivity. The regularization operator L is a carefully chosen matrix, often the identity matrix or a discrete approximation of the first- or second-order derivative operator.
In application to image processing, however, Tikhonov regularization can produce poor solutions (with overly smoothed edges) when the desired solution comes from an image with edges; that is, it overly penalizes discontinuities in the solutions [10]. In this regard, Rudin et al. [11] proposed total variation (TV) regularization, which has the ability to preserve edges well and remove noise at the same time. The resulting model (commonly referred to as the Rudin-Osher-Fatemi (ROF) model) has been proven successful in a wide range of applications in image processing [12]. We should note that there are many other edge-preserving restoration techniques in the literature, such as the anisotropic diffusion method of Perona and Malik for denoising [13], morphological wavelets, and the dyadic tree-based edge-preserving method proposed by Xiang and Ramadge [14]. In this paper, we focus on the minimization problem with TV regularization (3), where ‖·‖_TV denotes the discrete TV norm. To define the discrete TV norm, we first introduce the discrete gradient [15]: for an n × n image f, (∇f)_{i,j} = ((∇f)¹_{i,j}, (∇f)²_{i,j}), where (∇f)¹_{i,j} = f_{i+1,j} − f_{i,j} for i < n and 0 for i = n, (∇f)²_{i,j} = f_{i,j+1} − f_{i,j} for j < n and 0 for j = n, and f_{i,j} represents the value of pixel (i, j) in the image. Then the discrete TV norm of f is defined as ‖f‖_TV = Σ_{i,j} |(∇f)_{i,j}|, where |·| denotes the Euclidean norm in R² at every pixel.
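As a concrete illustration, the discrete gradient and TV norm above can be computed in a few lines. This is a minimal sketch (the function name is our own), assuming forward differences with the zero-at-the-boundary convention of [15]:

```python
import numpy as np

def tv_norm(f):
    """Isotropic discrete TV norm of a 2-D image f.

    Uses forward differences; the difference is taken as zero at the
    last row/column, matching the boundary convention of Chambolle.
    """
    dx = np.zeros_like(f, dtype=float)
    dy = np.zeros_like(f, dtype=float)
    dx[:-1, :] = f[1:, :] - f[:-1, :]   # vertical forward difference
    dy[:, :-1] = f[:, 1:] - f[:, :-1]   # horizontal forward difference
    # sum of pointwise Euclidean norms of the gradient vectors
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))
```

For a constant image the TV norm is zero, and a single vertical edge contributes its height times its length.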
In recent years, a great many algorithms have been developed for total variation based image restoration and have proved effective for reducing blur and noise while preserving edges. In the original TV regularization paper [11], the authors proposed a time marching scheme to solve the associated Euler-Lagrange equation of (3). The drawback of this method is that it is very slow due to stability constraints. Later, Vogel and Oman [16] proposed a lagged diffusivity fixed point method to solve the same Euler-Lagrange equation of (3). They proved that this method is globally convergent and asymptotically faster than the explicit time marching scheme [17]. In [18], Chan et al. applied Newton's method to solve the nonlinear primal-dual system of (3). Chambolle [15] considered a dual formulation of the TV denoising problem and proposed a semi-implicit gradient descent algorithm to solve the resulting constrained optimization problem. This method is globally convergent with a suitable step size. Recently, Wang et al. [19] proposed a fast total variation deconvolution method which uses a splitting technique and constructs an iterative procedure that alternately solves a pair of easy subproblems associated with an increasing sequence of penalty parameter values. In [20], Goldstein and Osher proposed the novel split Bregman iterative algorithm to deal with the artificial constraints; their method has several advantages, such as a fast convergence rate and stability.
More recently, Huang et al. [2] proposed a minimization problem of the form (7), in which the two regularization parameters are positive. The authors employed an alternating minimization algorithm to solve system (7). The numerical results on image restoration show the efficiency of their method. The idea of this method is similar to the one proposed in [19]: both use the penalty method by introducing an auxiliary variable.
In [2], the minimization problem (7) is solved in two steps: a deblurring step, which is performed via Tikhonov regularization, and a denoising step. Although noise can be removed to a certain extent in the second step, some details may already be lost before the denoising step, since the Tikhonov regularization used in the first (deblurring) step penalizes edges.
In [21], Chavent and Kunisch considered a total bounded variation regularization minimization problem, given by (8), in which both parameters are positive. The authors proved that the solution of system (8) is unique. In [22], Hintermüller and Kunisch applied the semismooth Newton method to solve the Fenchel predual of (8). Numerical results for image denoising and zooming/resizing showed the efficiency of their approach. In [23], Liu and Huang introduced an extended split Bregman iteration to solve the minimization problem (8). Numerical simulations illustrated the excellent reconstruction performance of their method.
Note that the unconstrained problem (3) is equivalent to the constrained minimization problem (9). Then, using the penalty method, we obtain the proposed minimization problem (10), in which all three regularization parameters are positive. We note that problem (9) coincides with problem (7) for a particular choice of the parameters. The minimization problem (10) can be rewritten as (11). The method for solving the minimization problem (11) will be discussed in Section 2. Our numerical results will show that the proposed method yields state-of-the-art results in terms of both SSIM and PSNR.
This paper is outlined as follows. In the next section, we first give a brief introduction of the split Bregman method, and then we propose an iterative algorithm for solving (10). The convergence property of the proposed method is given in Section 3. In Section 4, we present numerical experiments to show the efficiency of the proposed method. Finally, the concluding remarks can be found in Section 5.
2. Alternating Minimization Iterative Scheme
In this section, we derive an algorithm to solve the minimization problem (11). Before we discuss the alternating iterative algorithm for solving (11), we would like to give a brief introduction of the split Bregman iteration [20].
2.1. Split Bregman Iteration
2.1.1. Bregman Iteration
Bregman iteration is a concept that originated in functional analysis for finding minimizers of convex functionals [24]. Osher et al. [25] first applied the Bregman iteration to the ROF model for the denoising problem in image processing. The basic idea of the Bregman iteration is to transform a constrained optimization problem into an unconstrained one. The objective functional in the transformed unconstrained problem is defined by means of the Bregman distance of a convex functional. Suppose the unconstrained problem is formulated as (12), where J is a convex function and the second term is convex and differentiable. The Bregman distance of the convex function J at the point v is defined as the (nonnegative) quantity D_J^p(u, v) = J(u) − J(v) − ⟨p, u − v⟩, where p ∈ ∂J(v); that is, p is one of the subgradients of J at v. The authors in [25] employed the Bregman iteration method to solve the unconstrained problem (12). Starting from an initial guess, the Bregman iteration alternately performs the scheme (13). As shown in [25, 26], when the constraint operator is linear, the iteration (13) can be reformulated into the simplified method (14). This Bregman iteration technique has two main advantages over traditional penalty function/continuation methods. One is that it converges very quickly when applied to certain types of objective functions, especially for problems where J contains an ℓ1 regularization term. The other advantage is that the penalty parameter in (14) remains constant. See [20, 25, 26] for further details.
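To make these advantages concrete, the following toy sketch (our own, not from the paper) applies the simplified iteration (14) in the special case where the constraint operator is the identity and J is the ℓ1-norm; the subproblem minimizer is then plain soft-thresholding, and adding the residual back enforces the constraint u = g exactly after finitely many steps, with the penalty parameter held fixed throughout:

```python
import numpy as np

def shrink(v, t):
    # soft-thresholding: pointwise minimizer of t*|u| + 0.5*(u - v)^2
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bregman_l1(g, lam=1.0, iters=50):
    """Simplified Bregman iteration for min |u|_1 subject to u = g
    (constraint operator = identity).  Each step solves the easy
    unconstrained subproblem min |u|_1 + (lam/2)*||u - b||^2 by
    soft-thresholding, then adds the residual g - u back into b."""
    b = np.zeros_like(g)
    u = np.zeros_like(g)
    for _ in range(iters):
        b = b + (g - u)           # add the residual back
        u = shrink(b, 1.0 / lam)  # solve the subproblem exactly
    return u
```

With a fixed penalty parameter, the iterates reach u = g exactly after a few steps, which a pure penalty method would only approach as the parameter grows.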
2.1.2. Split Bregman Iteration
In [20], the authors considered a problem of the form (16), in which both functionals involved are convex and differentiable. To solve the problem (16), they converted the constrained problem into the unconstrained problem (17).
By introducing suitable auxiliary variables, the aforementioned Bregman iteration (14) can be applied to (17) in a similar fashion. This yields the following elegant two-phase iterative algorithm (the split Bregman iteration scheme).
The split Bregman iteration (18)-(19) has a stable convergence property, and it is extremely fast and very simple to program. For more details on the split Bregman method and its applications, see [20, 23, 27, 28].
2.2. Proposed Alternating Iteration Scheme
In this section, we propose an alternating minimization algorithm to solve the problem (10). Given an initial guess, we generate the iterates (20) by alternately minimizing over the two variables. As a matter of convenience, we express the relationship between successive iterates through the operator notation introduced in (21), which we use throughout for simplicity.
With the denoised variable fixed, the minimization problem (10) reduces to the subproblem (22) in the restored image alone. Although there are many methods to solve (22), we focus on the split Bregman iteration here. Introducing auxiliary variables and applying (18) and (19), we obtain the scheme (23). Clearly, the minimizations with respect to the two variables in (23) are decoupled, and thus they can be solved separately. We solve the subminimization problem (23) efficiently by iteratively minimizing the subproblems (24) and (25) with respect to each variable separately. The minimizer of (24) is given by the normal equation (26), where I is the identity matrix and H^T represents the adjoint of H. When an appropriate boundary condition is given, the normal equation (26) can be solved by fast algorithms. In this paper, we impose the periodic boundary condition; the matrices involved are then all block circulant [4, 8] and can be diagonalized by the two-dimensional discrete Fourier transform [19]. By applying the convolution theorem, we obtain the closed-form solution (27), where "*" denotes complex conjugation, "∘" denotes componentwise multiplication, and the division is componentwise as well.
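Under the periodic boundary condition, the FFT-based solve of a normal equation of this shape can be sketched as follows. This is a minimal illustration with our own function names; `mu` and `z` stand in for the penalty parameter and the auxiliary variable of the text, and `psf` is the point spread function that generates the block circulant blurring matrix H:

```python
import numpy as np

def solve_normal_eq(g, z, psf, mu):
    """FFT solve of (H^T H + mu*I) f = H^T g + mu*z, assuming H acts as
    periodic (circular) convolution with `psf`, i.e. H is block
    circulant and diagonalized by the 2-D DFT."""
    Hhat = np.fft.fft2(psf, g.shape)          # eigenvalues of H
    rhs = np.conj(Hhat) * np.fft.fft2(g) + mu * np.fft.fft2(z)
    fhat = rhs / (np.abs(Hhat) ** 2 + mu)     # componentwise division
    return np.real(np.fft.ifft2(fhat))
```

When the PSF is a delta (no blur), H is the identity and the formula collapses to the weighted average (g + mu*z)/(1 + mu), which gives a quick sanity check.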
The minimizer of (25) can be determined by the following shrinkage formula:
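For the isotropic TV norm, the shrinkage formula couples the two gradient components and shrinks their pointwise magnitude. A sketch (our own naming, assuming the standard isotropic soft-thresholding formula):

```python
import numpy as np

def iso_shrink(vx, vy, t):
    """Isotropic shrinkage: for each pixel, scale the vector (vx, vy)
    so its magnitude shrinks by t, clamped at zero."""
    mag = np.sqrt(vx ** 2 + vy ** 2)
    # avoid division by zero: where mag == 0 the output is 0 anyway
    scale = np.maximum(mag - t, 0.0) / np.where(mag > 0, mag, 1.0)
    return scale * vx, scale * vy
```

For example, a gradient vector (3, 4) with threshold 2.5 has magnitude 5, shrinks to magnitude 2.5, and keeps its direction.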
To sum up, we get Algorithm 1 for solving the subproblem (22).

With the restored image fixed, the outer minimization problem can be interpreted as a TV minimization scheme that denoises the image recovered in the previous step; the resulting problem is (29). There are several efficient methods for solving this problem, such as the primal-dual method [18, 29], the lagged diffusivity fixed point iteration proposed by Vogel and Oman [16], the semismooth Newton method [30], and Chambolle's dual algorithm [15]. Owing to its simplicity, we adopt Chambolle's dual algorithm here for the TV denoising problem (29). The idea given by Chambolle is to replace the optimization of the image by the optimization of a dual vector field related to the image. For a noisy image, the vector field is the one that minimizes (30), where p_{i,j} is the dual variable at the (i, j)th pixel location, p is the concatenation of all p_{i,j}, and the discrete divergence of p, the negative adjoint of the discrete gradient, is given componentwise as in the text. The iterative scheme proposed by Chambolle for computing the optimal solution is (32), where τ is the step size and p^n is the nth iterate of the method; see [15] for more details. After the minimizer of the constrained optimization problem in (31) is determined, the denoised image can be computed by (33).
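A compact sketch of Chambolle's projection iteration follows (our own implementation, assuming the usual forward-difference gradient, its negative-adjoint divergence, and the step size bound τ ≤ 1/8 under which the method converges):

```python
import numpy as np

def grad(u):
    # forward differences, zero at the far boundary
    px = np.zeros_like(u); py = np.zeros_like(u)
    px[:-1, :] = u[1:, :] - u[:-1, :]
    py[:, :-1] = u[:, 1:] - u[:, :-1]
    return px, py

def div(px, py):
    # discrete divergence: the negative adjoint of grad
    fx = np.zeros_like(px); fy = np.zeros_like(py)
    fx[0, :] = px[0, :]
    fx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    fx[-1, :] = -px[-2, :]
    fy[:, 0] = py[:, 0]
    fy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    fy[:, -1] = -py[:, -2]
    return fx + fy

def chambolle_denoise(g, lam, tau=0.125, iters=100):
    """Chambolle's semi-implicit dual iteration for TV denoising of g
    with fidelity weight 1/(2*lam); the denoised image is recovered
    from the dual field as u = g - lam * div(p)."""
    px = np.zeros_like(g); py = np.zeros_like(g)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - g / lam)
        mag = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * mag)
        py = (py + tau * gy) / (1.0 + tau * mag)
    return g - lam * div(px, py)
```

A constant image is a fixed point of the iteration (it already has zero total variation), and on a noisy image the output is smoother than the input.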
In summary, we obtain Algorithm 2 by using alternating minimization scheme to solve the minimization problem (10).
3. Convergence Analysis
In this section, we make use of a theorem proposed in [31] to establish the convergence of the proposed algorithm. The theorem is as follows.
Theorem 1 (see [31]). Let T be an averaged nonexpansive operator with a nonempty set of fixed points. Then, for any starting point, the sequence of iterates of T converges weakly to a fixed point of T.
Definition 2. An operator T is called nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x and y. Given a nonexpansive operator N, the operator T = (1 − α)I + αN for some α ∈ (0, 1) is said to be averaged.
Definition 3. An operator T is called ν-inverse strongly monotone (ν-ism) if there is ν > 0 such that ⟨Tx − Ty, x − y⟩ ≥ ν‖Tx − Ty‖². Let G = I − T be the complement of the operator T; then one easily obtains the identity ⟨Gx − Gy, x − y⟩ = ‖x − y‖² − ⟨Tx − Ty, x − y⟩. An operator is called firmly nonexpansive if it is 1-ism.
Lemma 4 (see [31]). An operator N is nonexpansive if and only if its complement I − N is (1/2)-ism. If G is ν-ism and γ > 0, then the operator γG is (ν/γ)-ism.
Lemma 5. An operator T is averaged nonexpansive if and only if its complement G = I − T is ν-ism for some ν > 1/2.
Proof. First, suppose T is averaged; from Definition 2, there exist a nonexpansive operator N and α ∈ (0, 1) such that T = (1 − α)I + αN, and then G = I − T = α(I − N). Since N is nonexpansive, from Lemma 4 we have that I − N is (1/2)-ism, and hence G = α(I − N) is (1/(2α))-ism with 1/(2α) > 1/2.
Now assume that G is ν-ism with ν > 1/2. Write G = α(G/α) for some α ∈ (0, 1) satisfying να ≥ 1/2. From Lemma 4, G/α is (να)-ism, hence (1/2)-ism, so N = I − G/α is nonexpansive; then T = I − G = (1 − α)I + αN, and from Definition 2 the operator T is averaged.
Lemma 6 (see [32]). Let φ be convex and lower semicontinuous and λ > 0. Suppose the proximity operator is defined by prox_{λφ}(y) = argmin_x { φ(x) + (1/(2λ))‖x − y‖² } for every y. Then prox_{λφ} and I − prox_{λφ} are firmly nonexpansive.
Theorem 7. Let the regularization parameters in (22) be positive numbers, and define the operator that maps each input to the unique minimizer of the corresponding subproblem (22). Then this operator is averaged nonexpansive.
Proof. Let two inputs be given. For each of them, the minimum in (22) is achieved at a unique point, which is characterized by an inclusion involving the subdifferential of the objective. From the properties of the subdifferential, we obtain a pair of inequalities; adding these two inequalities, we obtain (43). It is also straightforward to verify the inequality (44). From (43) and (44), we obtain (45). Then, from Definition 3, the operator is firmly nonexpansive. Analogously, we can easily show that the companion operator is also firmly nonexpansive. Therefore, it follows from Lemma 5 that both operators are averaged nonexpansive.
Corollary 8. The composed operator is averaged nonexpansive.
Proof. From Lemmas 5 and 6 and Theorem 7, we know that the two component operators are both averaged nonexpansive, so there exist nonexpansive operators such that each component has the representation in (46). Composing the two representations, we obtain (47). Then, for any pair of points, the nonexpansiveness estimate (48) holds, and the composition can be rewritten in the form (49), a convex combination of the identity and a nonexpansive operator. It follows from Definition 2 that the composed operator is averaged.
According to Theorem 1 and Corollary 8, we conclude that, for any initial guess, the sequence generated by (20) converges to a minimizer of the objective in (10).
4. Numerical Experiments
In this section, we present several numerical experiments to illustrate the behavior of the proposed method for image restoration problems. The quality of the restoration results by different methods is compared quantitatively using the PSNR and SSIM. PSNR is an engineering term for the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation; the higher the PSNR value, the better the image quality. The SSIM is a well-known quality metric used to measure the similarity between two images. The method was developed by Wang et al. [33] and is based on three statistical measures that are much closer to how the human eye perceives differences between images; the higher the SSIM value, the better the restoration. We also use the blurred signal-to-noise ratio (BSNR) to describe how much noise is added to the blurry image. Let f, g, f̃, and η denote the original image, the blurred and noisy image, the restored image, and the noise, respectively. The PSNR, SSIM, and BSNR are defined as in [2, 34], where the peak value in the PSNR is the maximum possible pixel value of the image (i.e., 255 when the pixels are represented using 8 bits per sample). The first term in (52) is the luminance comparison function, which measures the closeness of the two images' mean luminances; its unique maximum value 1 is attained if and only if the means coincide. The second term is the contrast comparison function, which measures the closeness of the contrasts of the two images, where the contrast is measured by the standard deviation; this term achieves its maximum value 1 if and only if the standard deviations coincide. The third term is the structure comparison function, which measures the correlation coefficient between the two images and involves the covariance between them. The SSIM index takes values in [0, 1]: a value of 0 means no correlation between the images, and a value of 1 means the two images are identical.
The positive constants in (52) can be thought of as stabilizing constants for near-zero denominator values. In the following experiments, we also use the SSIM index map to reveal areas of high/low similarity between two images; the whiter the SSIM index map, the closer the two images. We refer the reader to [33, 34] for further details on the SSIM and the SSIM index map.
The BSNR is given by the ratio, in decibels, between the variance of the blurred (noise-free) image and the noise variance.
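As a minimal illustration (our own helper names; the BSNR form below is one common variant, computed from the variances of the blurred image and of the noise), the PSNR and BSNR can be computed as:

```python
import numpy as np

def psnr(f, fr, peak=255.0):
    """PSNR = 10*log10(peak^2 / MSE) between original f and
    restoration fr; peak is the maximum possible pixel value."""
    mse = np.mean((f.astype(float) - fr.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def bsnr(blurred, noise):
    """BSNR = 10*log10(var(blurred) / var(noise)): how strong the
    (noise-free) blurred signal is relative to the added noise."""
    return 10.0 * np.log10(np.var(blurred) / np.var(noise))
```

For example, a noise variance 100 times smaller than the blurred-image variance corresponds to BSNR = 20 dB.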
In the following experiments, we compare our proposed method (referred to as FNDTV below) with FastTV [2]. For FastTV, based on the suggestions in [2], we fixed its parameters as recommended for BSNR = 40 dB and for BSNR = 30 dB, and we determined the value of the remaining regularization parameter that gives the restored images with the best performance. For our method, we also determined the values of the regularization parameters that give the best performance.
The stopping criterion of the proposed method is that the relative difference between successive iterates of the restored image satisfies ‖f^(k+1) − f^(k)‖ / ‖f^(k+1)‖ < tol, where f^(k) is the restored image at the kth iteration of the proposed method. The same tolerance is used in all tests for both methods.
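This relative-change test can be sketched as follows (a hypothetical helper; the default tolerance below is illustrative only, since the value used in the experiments is not reproduced here):

```python
import numpy as np

def converged(f_new, f_old, tol=1e-4):
    """Stop when the relative change between successive restored
    images drops below tol (tol here is an assumed placeholder)."""
    return np.linalg.norm(f_new - f_old) / np.linalg.norm(f_new) < tol
```

The main loop would call this after each outer iteration and break once it returns True.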
Four test images, "Cameraman," "Bridge," "Lena," and "Resolution Chart," which are commonly used in the literature, are shown in Figure 1. We test several kinds of blurring kernels, including average, motion, Gaussian, and out-of-focus blurs. These different blurring kernels can be generated by the corresponding Matlab built-in function. In all tests, we add Gaussian white noise at different BSNR levels to the blurred images.
All experiments are carried out under 32-bit Windows XP and Matlab v7.10, running on a desktop equipped with an Intel Core2 Duo CPU at 2.93 GHz and 2 GB of RAM.
4.1. Average Blur Example
In this example, we consider the well-known "Cameraman" image, shown in Figure 1(a). The image is blurred by a box average kernel and contaminated by Gaussian noise with BSNR = 40 dB. The blurred and noisy image is shown in Figure 2(a).
Figures 2(b)–2(d) show the images restored by FNDTV and FastTV. We can see that the visual quality of the reconstruction by FNDTV is slightly better than the outcome of FastTV. From Table 1, it is not difficult to see that both the PSNR and SSIM of the image recovered by FNDTV are higher than those obtained by FastTV.
We also show the SSIM index maps of the restored images recovered by the two methods in Figures 2(e)–2(f); the maps deliver more information about the quality degradation of the restored images. By comparison, the SSIM map of the image restored by the proposed method is slightly whiter than the SSIM map for FastTV.
4.2. Motion Blur Example
Motion blur is considered in this example. The observed image is the so-called "Resolution Chart" [11]; it is degraded by a motion blur of length 7 and contaminated by white Gaussian noise with BSNR = 30 dB. The degraded image is shown in Figure 3(a). The restored images by FastTV and FNDTV are presented in Figures 3(b) and 3(c). We also report the PSNR and SSIM values of these methods in Table 1. We see that both the PSNR and SSIM values of the image restored by the FNDTV method are higher than those of FastTV. In addition, the SSIM map obtained by the proposed method is slightly whiter than the map for FastTV.
4.3. Gaussian Blur Example
The original image "Bridge" is shown in Figure 1(c). The blurred and noisy image is degraded by a Gaussian blur with radius 3 and a fixed standard deviation, and then contaminated by Gaussian noise with BSNR = 40 dB. Figure 4(a) shows the blurred and noisy observation. The restored images obtained by FastTV and FNDTV are shown in Figures 4(b) and 4(c). The numerical results of the two methods in terms of PSNR and SSIM are given in Table 1.
From Table 1, it is not difficult to see that the PSNR and SSIM of the image restored by FNDTV are higher than those obtained by FastTV.
4.4. OutofFocus Blur Example
This example consists of restoring the image "Lena" degraded by an out-of-focus blur with radius 3 and contaminated by white Gaussian noise with BSNR = 30 dB. The image "Lena" is a good test image because it has a nice mixture of detail, flat regions, shading areas, and texture, and it has been widely used in the literature to test image restoration algorithms [19]. Figure 5(a) shows the blurred and noisy image. The restored results of both methods are shown in Figures 5(b)–5(d), and Table 1 lists the PSNR and SSIM values. From the table, we observe that both the PSNR and SSIM values of the image restored by the proposed method are better than those obtained by FastTV.
5. Conclusion
In this paper, we have presented a new efficient algorithm for image restoration based on total variation regularization. We give a convergence proof for the algorithm, and the numerical results show that the proposed method is competitive with the state-of-the-art method FastTV. An important additional feature is that the proposed method suppresses noise very well while preserving details of the restored image. We will consider extending the proposed method to color or other multichannel image restoration in the future.
Conflict of Interests
None of the coauthors has a direct financial relation with the trademarks mentioned in this paper that might lead to a conflict of interests.
Acknowledgments
The work of Jun Liu, Ting-Zhu Huang, and Si Wang is supported by the 973 Program (2013CB329404), NSFC (61170311), the Chinese Universities Specialized Research Fund for the Doctoral Program (20110185110020), and the Sichuan Province Science and Technology Research Project (2012GZX0080). The work of Xiao-Guang Lv is supported by the Natural Science Foundation of Jiangsu Province (BK20131209).