Abstract

Total-variation (TV) regularization has been widely used in the image restoration domain because of its attractive edge preservation ability. However, the estimation of the regularization parameter, which balances the TV regularization term and the data-fidelity term, is a difficult problem. In this paper, based on the classical split Bregman method, a new fast algorithm is derived that simultaneously estimates the regularization parameter and restores the blurred image. In each iteration, the regularization parameter is updated in closed form according to Morozov’s discrepancy principle. Numerical experiments on image deconvolution show that the proposed algorithm outperforms some state-of-the-art methods in both accuracy and speed.

1. Introduction

Digital image restoration, which aims at recovering an estimate of the original scene from a degraded observation, is a recurrent task with many real-world applications, for example, remote sensing, astronomy, and medical imaging. During acquisition, the observed images are often degraded by relative motion between the camera and the scene, defocusing of the lens system, atmospheric turbulence, and so forth. In most cases, the degradation can be modeled by a linear shift-invariant system, in which the original image is convolved with a spatially invariant point spread function (PSF) and contaminated with Gaussian white noise [1].

Without loss of generality, we assume that the digital gray-scale images used throughout this paper are defined on a square domain and are represented by vectors formed by stacking up the rows of the image matrix, so that each pixel becomes one entry of the vector. Then, in general, the degradation process can be modeled as the following discrete linear inverse problem: f = Hu + n, (1) where f and u are the observed image and the original image, respectively, both expressed in vectorial form, H is the convolution operator corresponding to the spatially invariant PSF, which is assumed to be known, and n is a vector of zero-mean Gaussian white noise of variance σ². In most cases, H is ill-conditioned, so that u cannot be estimated from f directly. The solution of (1) is highly sensitive to noise in the observed image, and the task becomes a well-known ill-posed linear inverse problem (IPLIP). Inverse filtering in a least-squares sense, which tries to solve this problem directly, usually produces an unusable estimate.
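As an illustration (the image file, blur kernel, and noise level below are arbitrary example choices, not values from the paper), the degradation model (1) can be simulated in a few lines of MATLAB:

% Simulate f = H*u + n with a spatially invariant PSF and Gaussian noise.
u     = im2double(imread('cameraman.tif'));   % original image
psf   = fspecial('gaussian', [9 9], 3);       % example blur kernel
sigma = 0.01;                                 % example noise standard deviation
f     = imfilter(u, psf, 'circular', 'conv') + sigma * randn(size(u));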

If we have some prior knowledge about the original image, such as a prior distribution or sparsity, we can incorporate this information into the restoration process via some form of regularization [2]. This makes the solution of the IPLIP possible. A large class of regularization approaches leads to the following minimization problem: û = arg min_u Φ(u) + (λ/2) ||Hu − f||_2^2, (2) where û is the estimate of u, Φ(·) is a regularization functional, and λ is the so-called regularization parameter. The first term of (2) represents the regularization term, whereas the second represents the data-fidelity term. The regularization term stabilizes the problem numerically and encourages the result to have desirable properties, while the positive regularization parameter λ balances the relative weight of the two terms.

Among the various regularization methods, total-variation (TV) regularization is famed for its attractive edge preservation ability. It was introduced into image restoration by Rudin et al. [3] in 1992. Since then, TV regularization has attracted significant attention [4–7] and has given rise to several variants [8–10]. The objective functional of the TV restoration problem is given by û = arg min_u TV(u) + (λ/2) ||Hu − f||_2^2, with TV(u) = Σ_i ||(∇u)_i||, (3) where the first term is the so-called TV seminorm of u and (∇u)_i (its detailed definition is given in Section 2) is the discrete gradient of u at pixel i. In the minimization functional (3), the TV is isotropic if ||·|| is the 2-norm and anisotropic if it is the 1-norm. We emphasize that our method is applicable to both the isotropic and the anisotropic case; however, we only treat the isotropic one for simplicity, since the treatment of the other is completely analogous. Despite the advantage of edge preservation, the minimization of functional (3) is troublesome and has no closed-form solution. Various methods have been proposed to minimize (3), including time-marching schemes [3], primal-dual based methods [11–13], fixed-point iteration approaches [14], and variable splitting algorithms [15–17]. In particular, the split Bregman method adopted in this paper is an instance of the variable splitting based algorithms.
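For reference, the discrete isotropic TV seminorm appearing in (3) can be evaluated with forward differences under a periodic boundary condition as follows; the circshift-based implementation is our own choice, not taken from the paper.

% Isotropic TV seminorm of an image u (periodic boundary condition).
tv = @(u) sum(sum( sqrt( (circshift(u, [0 -1]) - u).^2 + ...
                         (circshift(u, [-1 0]) - u).^2 ) ));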

Another critical issue in TV regularization is the selection of the regularization parameter λ, since it plays a very important role. If λ is too large, the regularized solution will be undersmoothed; on the contrary, if λ is too small, the regularized solution will not fit the observation properly. Most works in the literature only consider a fixed λ, and, when applying these methods to image restoration problems, one must adjust λ manually to obtain a satisfactory solution. So far, only a few strategies have been proposed for the adaptive estimation of the parameter λ, for example, the L-curve method [18], the variational Bayesian approach [19], the generalized cross-validation (GCV) method [20], and Morozov’s discrepancy principle [21].

If the noise level is available or can be estimated first, Morozov’s discrepancy principle is a good choice for selecting λ. According to this rule, the TV image restoration problem can be described as min_u TV(u) subject to u ∈ S, with S = {u : ||Hu − f||_2^2 ≤ c}, (4) where S is the feasible set in accordance with the discrepancy principle and c is a noise-dependent upper bound. Although it is much easier to solve the unconstrained problem (3) than the constrained problem (4), formulation (4) has a clear physical meaning (c is proportional to the noise variance), and this makes its estimation easier. In fact, referring to the theory of Lagrangian methods, if u* is a solution of the constrained problem (4), it is also a solution of (3) for a particular choice of λ, namely the Lagrange multiplier corresponding to the constraint in (4). To minimize (4), we have either λ = 0, when the constraint is inactive, or λ > 0, when the constraint is active. In fact, if λ = 0, minimizing (3) is equivalent to minimizing TV(u), which means that the solution is a constant image. Obviously, this will not happen for a natural image; therefore, only λ > 0 occurs in practical applications.
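A direct way to test whether a candidate restoration satisfies the constraint in (4) is sketched below; the form c = tau * N * sigma^2 and the value tau = 1 are common choices we assume here only for illustration (see also Section 3.2).

% Check Morozov's discrepancy principle for a candidate restoration u_hat.
% Hfun applies the blur operator; sigma2 is the noise variance.
discrepancy_ok = @(u_hat, Hfun, f, sigma2, tau) ...
    norm(Hfun(u_hat) - f, 'fro')^2 <= tau * numel(f) * sigma2;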

There is no closed-form solution of functional (3) or (4), and, up to now, several papers have addressed the numerical solution of problem (4). In [22], the authors provided a modular solver that updates λ so that existing methods for the unconstrained problem can be reused. Afonso et al. [17] proposed an approach based on the alternating direction method of multipliers (ADMM) and suggested using Chambolle’s dual method [23] to adaptively restore the degraded image. In [13], Wen and Chan proposed a primal-dual based method to solve the constrained problem (4): the minimization problem was transformed into a saddle-point problem of the primal-dual model of (4), and the proximal point method [24] was then applied to find the saddle point; when updating λ, they resorted to an inner Newton iteration. All the methods mentioned above share the same limitation: in order to update λ adaptively, an inner iteration is introduced, which results in extra computational cost.

In this paper, based on the split Bregman scheme, we propose a fast algorithm to solve the constrained TV restoration problem (4). Using the variable splitting technique, we introduce two auxiliary variables to represent Hu and the argument of the TV norm, respectively, so that the constrained problem (4) can be solved efficiently in a separable structure without any inner iteration. Unlike previous works on adaptive regularization parameter estimation in TV restoration, our method involves no inner iteration and adjusts the regularization parameter in closed form in each iteration; thus, a fast computation speed is achieved. Simulation results on TV restoration problems indicate that our method outperforms some well-known methods in accuracy and especially in speed. By the equivalence of the split Bregman method, ADMM, and the Douglas-Rachford splitting algorithm under linear constraints [25–27], our algorithm can also be seen as an instance of ADMM or of the Douglas-Rachford splitting algorithm.

In the rest of this paper, the basic notation is presented in Section 2. Section 3 gives the derivation leading to the proposed algorithm and some practical parameter setting strategies. In Section 4, several experiments are reported to demonstrate the effectiveness of our algorithm. Finally, Section 5 draws a short conclusion of this paper.

2. Basic Notation

Let us describe the notation used throughout this paper. An image with N pixels is treated as a vector in the Euclidean space V = R^N, whereas discrete gradient fields are elements of the Euclidean space Q = V × V. The ith components of u ∈ V and p ∈ Q are denoted by u_i ∈ R and p_i = (p_i^x, p_i^y) ∈ R², respectively. We equip V and Q with the usual inner products ⟨·,·⟩ and the induced 2-norms ||·||_2. For each u ∈ V, we define the discrete gradient (∇u)_i = ((D^x u)_i, (D^y u)_i), where D^x and D^y are the first-order finite-difference matrices in the horizontal and vertical directions, so that ∇u ∈ Q. Equivalently, (∇u)_i = D_i u, where D_i is the two-row matrix formed by stacking the ith rows of D^x and D^y together. The global first-order finite-difference operator D is then obtained by stacking D^x and D^y, and we consider Du for u ∈ V. From (6) and (7), we see that the periodic boundary condition is assumed here.
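For an n × n image indexed by (i, j), one standard realization of D^x and D^y that is consistent with the periodic boundary condition mentioned above is (in our notation)

(D^x u)_{i,j} = u_{i, j+1} − u_{i, j}, with u_{i, n+1} := u_{i, 1},
(D^y u)_{i,j} = u_{i+1, j} − u_{i, j}, with u_{n+1, j} := u_{1, j}.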

Given a convex functional J, the subdifferential of J at v is defined as ∂J(v) = {p : J(w) ≥ J(v) + ⟨p, w − v⟩ for all w}, and the Bregman distance between u and v associated with p ∈ ∂J(v) is defined as D_J^p(u, v) = J(u) − J(v) − ⟨p, u − v⟩. From this definition, we see that the Bregman distance is always nonnegative.
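As a simple illustration of the definition (the example is ours, not the paper’s), take J(u) = (1/2)||u||_2^2, whose subdifferential at v is the singleton {v}; the Bregman distance then reduces to

D_J^v(u, v) = (1/2)||u||_2^2 − (1/2)||v||_2^2 − ⟨v, u − v⟩ = (1/2)||u − v||_2^2 ≥ 0,

which is consistent with the nonnegativity noted above.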

3. Methodology

3.1. Deduction of the Proposed Algorithm

We use the variable splitting technique [28] to free the discrete gradient operator from the nondifferentiable TV term and to simplify the updating of the regularization parameter. One auxiliary variable, d, is introduced for ∇u, and another auxiliary variable, z, is introduced to represent Hu (or Hu − f, resp.). Functional (3) is therefore equivalent to the split formulation (10). We then define the corresponding Bregman functional and its Bregman distance, and, according to the split Bregman method [16, 29], we obtain the iterative scheme (14)–(16). After introducing compact notation for the stacked variables, it follows from (14)–(16) that the scheme can be written as the iterative scheme (19). In scheme (19), computing the joint minimizer exactly is difficult, since it would require an inner iterative scheme. Here, we adopt the alternating direction method (ADM) to approximately compute u, d, and z in each iteration, which leads to the iterative framework (20)–(22). In the following, we discuss how to solve problems (20)–(22) efficiently.
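Although the precise scheme is given by (19)–(22) together with the Bregman-variable updates, the generic structure of a two-split Bregman/ADM iteration of this kind can be sketched as follows; the penalty parameters ρ1, ρ2 and the Bregman variables b_d, b_z are our own notation and sign convention, introduced only to fix ideas:

u^{k+1} = arg min_u (ρ1/2) ||d^k − ∇u − b_d^k||_2^2 + (ρ2/2) ||z^k − Hu − b_z^k||_2^2,
d^{k+1} = arg min_d Σ_i ||d_i|| + (ρ1/2) ||d − ∇u^{k+1} − b_d^k||_2^2,
z^{k+1} = arg min_z (ρ2/2) ||z − Hu^{k+1} − b_z^k||_2^2 subject to ||z − f||_2^2 ≤ c,
b_d^{k+1} = b_d^k + ∇u^{k+1} − d^{k+1},   b_z^{k+1} = b_z^k + Hu^{k+1} − z^{k+1}.

The first three lines play the roles of the subproblems (20)–(22), and the last line plays the role of the Bregman-variable updates used in Algorithm 1.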

The minimization subproblem with respect to u is a least-squares problem. Setting the gradient of functional (20) to zero yields a linear system of normal equations.

Under the periodic boundary condition, the matrices H, D^x, and D^y are block-circulant with circulant blocks, so they can be diagonalized by the two-dimensional discrete Fourier transform (DFT) matrix. Using the convolution theorem of the Fourier transform, we obtain the closed-form solution (26), where F(·) denotes the DFT, “∗” denotes the complex conjugate, and “·” represents componentwise multiplication; the division is also componentwise. Therefore, problem (20) can be solved by two fast Fourier transforms (FFTs) and one inverse FFT, that is, in O(N log N) operations.
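A minimal MATLAB sketch of such an FFT-domain solve is given below for the generic quadratic u-subproblem written in Section 3.1; rho1, rho2 are the penalty parameters and dx, dy, z, bdx, bdy, bz the auxiliary and Bregman variables in our notation, so the exact weighting in the paper’s (26) may differ.

% Precomputed once: transfer functions of the blur and of the forward
% differences under the periodic boundary condition (psf2otf is from the
% Image Processing Toolbox).
Hhat  = psf2otf(psf, size(f));
Dxhat = psf2otf([1 -1], size(f));      % horizontal forward difference
Dyhat = psf2otf([1; -1], size(f));     % vertical forward difference
denom = rho2 * abs(Hhat).^2 + rho1 * (abs(Dxhat).^2 + abs(Dyhat).^2);

% Per-iteration u-update: solve the normal equations in the Fourier domain.
rhs = rho2 * conj(Hhat) .* fft2(z - bz) ...
    + rho1 * (conj(Dxhat) .* fft2(dx - bdx) + conj(Dyhat) .* fft2(dy - bdy));
u   = real(ifft2(rhs ./ denom));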

Functional (21) is a proximal minimization problem, and it can be solved componentwise by a two-dimensional shrinkage, given in (27). During the calculation, we employ the convention 0 × (0/0) = 0 to avoid meaningless results.
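The isotropic two-dimensional shrinkage has the following well-known componentwise form; in this sketch, vx and vy are the two components of the point being shrunk (∇u^{k+1} + b_d^k in the notation above), t = 1/rho1 is the threshold, and the max(s, eps) trick implements the 0 × (0/0) = 0 convention.

function [dx, dy] = shrink2(vx, vy, t)
% Componentwise isotropic shrinkage: d_i = max(|v_i| - t, 0) * v_i / |v_i|.
s  = sqrt(vx.^2 + vy.^2);
sh = max(s - t, 0) ./ max(s, eps);   % equals 0 wherever s == 0
dx = sh .* vx;
dy = sh .* vy;
end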

When dealing with problem (22), we first set λ = 0 and test the resulting minimizer. It is obvious that z is related to Hu and plays the role of Hu in the discrepancy constraint. Therefore, in each iteration, we should examine whether ||z − f||_2^2 ≤ c holds true, that is, whether z meets the discrepancy principle.

The solutions for z and λ fall into two cases, according to whether the discrepancy condition (28) is satisfied. (1) If (28) holds true, we set λ = 0 and keep z as the unconstrained minimizer of (22); obviously, this satisfies the discrepancy principle. (2) If (28) does not hold, then, according to the discrepancy principle, we should solve equation (29), which enforces the constraint with equality. Since the minimization problem (22) with respect to z is quadratic, it has the closed-form solution (30). Substituting z in (29) with (30), we obtain the closed-form expression (31) for λ.
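Under the generic splitting written in Section 3.1, this case analysis amounts to pulling the point Hu^{k+1} + b_z^k toward the ball ||z − f||_2 ≤ sqrt(c); the closed forms in the sketch below are our own reconstruction of that logic and may differ from the paper’s (30)-(31) in their exact parameterization.

function [z, lambda] = update_z_lambda(v, f, c, rho2)
% v is the unconstrained minimizer of (22), i.e. Hu^{k+1} + bz^k.
r = norm(v(:) - f(:));
if r^2 <= c
    lambda = 0;                                   % case (1): constraint inactive
    z = v;
else
    lambda = rho2 * (r / sqrt(c) - 1);            % case (2): enforce ||z - f||^2 = c
    z = (rho2 * v + lambda * f) / (rho2 + lambda);
end
end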

The above discussion is summarized in Algorithm 1, referred to as APE-SBA in the sequel.

Input: f, H, c.
(1) Initialize u^0, d^0, z^0, b_d^0, b_z^0. Set k = 0 and choose the penalty parameters ρ1, ρ2 > 0.
(2) while the stopping criterion is not satisfied, do
(3)  Compute u^{k+1} according to (26);
(4)  Compute d^{k+1} according to (27);
(5)  if (28) holds, then
(6)   set λ^{k+1} = 0 and keep z^{k+1} as the unconstrained minimizer of (22);
(7)  else
(8)   update λ^{k+1} and z^{k+1} according to (31) and (30);
(9)  end if
(10) Update b_d^{k+1} and b_z^{k+1} according to (23) and (24);
(11) k = k + 1;
(12) end while
(13) return u^k and λ^k.
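To make the structure of Algorithm 1 concrete, the following self-contained MATLAB sketch assembles the update steps discussed above into one loop. All concrete formulas, variable names (rho1, rho2, tau, tol), and the relative-change stopping test are our own reconstructions under the generic splitting of Section 3.1; the sketch illustrates the flow of APE-SBA rather than reproducing the authors’ exact implementation.

function [u, lambda] = ape_sba_sketch(f, psf, sigma2, tau, rho1, rho2, tol, maxit)
% TV deconvolution with the regularization parameter chosen in closed form
% by Morozov's discrepancy principle (APE-SBA-style loop, reconstructed).
c = tau * numel(f) * sigma2;                % noise-dependent bound (assumed form)

% Transfer functions of the operators (periodic boundary condition).
Hhat  = psf2otf(psf, size(f));
Dxhat = psf2otf([1 -1], size(f));
Dyhat = psf2otf([1; -1], size(f));
denom = rho2 * abs(Hhat).^2 + rho1 * (abs(Dxhat).^2 + abs(Dyhat).^2);

% Initialization.
u  = f;  z = f;  lambda = 0;
dx = zeros(size(f));  dy = dx;  bdx = dx;  bdy = dx;  bz = dx;

for k = 1:maxit
    u_old = u;

    % u-subproblem: FFT-domain least squares (cf. (26)).
    rhs = rho2 * conj(Hhat) .* fft2(z - bz) ...
        + rho1 * (conj(Dxhat) .* fft2(dx - bdx) + conj(Dyhat) .* fft2(dy - bdy));
    u   = real(ifft2(rhs ./ denom));

    % Gradient and blur of the new iterate, computed consistently via FFT.
    Fu = fft2(u);
    gx = real(ifft2(Dxhat .* Fu));
    gy = real(ifft2(Dyhat .* Fu));
    Hu = real(ifft2(Hhat  .* Fu));

    % d-subproblem: isotropic two-dimensional shrinkage (cf. (27)).
    vx = gx + bdx;  vy = gy + bdy;
    s  = sqrt(vx.^2 + vy.^2);
    sh = max(s - 1/rho1, 0) ./ max(s, eps);
    dx = sh .* vx;  dy = sh .* vy;

    % z-subproblem and lambda: discrepancy principle in closed form (cf. (28)-(31)).
    v = Hu + bz;
    r = norm(v(:) - f(:));
    if r^2 <= c
        lambda = 0;  z = v;
    else
        lambda = rho2 * (r / sqrt(c) - 1);
        z = (rho2 * v + lambda * f) / (rho2 + lambda);
    end

    % Bregman-variable updates (cf. (23)-(24)).
    bdx = bdx + gx - dx;  bdy = bdy + gy - dy;
    bz  = bz + Hu - z;

    % Stopping criterion: relative change of the restored image.
    if norm(u - u_old, 'fro') <= tol * max(norm(u_old, 'fro'), eps)
        break;
    end
end
end

A typical call might be [u, lambda] = ape_sba_sketch(f, psf, sigma^2, 1, 1, 10^(BSNR/10 - 1), 1e-4, 300), where the numerical values are purely illustrative.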

In algorithm APE-SBA, by introducing the auxiliary variable z, Hu is liberated from the constraint of the discrepancy principle, and consequently a closed form for updating λ is obtained without any inner iteration. This is the major difference between APE-SBA and the methods in [13, 17]. Since solving (26), which corresponds to the u-subproblem, consumes most of the time, the per-iteration cost of our algorithm is dominated by a few FFT operations. In fact, our algorithm is an instance of the classical split Bregman method, so its convergence is guaranteed by the theorem proposed by Eckstein and Bertsekas [30]. We summarize the convergence of our algorithm as follows.

Theorem 1. For ρ1, ρ2 > 0, the sequence (u^k, d^k, z^k, λ^k) generated by Algorithm APE-SBA from any initial point converges to a limit (u*, d*, z*, λ*), where (u*, d*, z*) is a solution of the functional (10). In particular, u* is the minimizer of functional (4), and λ* is the Lagrange multiplier corresponding to the constraint in (4), that is, the value of the regularization parameter for which u* also solves the unconstrained problem (3).

3.2. Parameter Setting

In this paper, the noise level is quantified by the blurred signal-to-noise ratio (BSNR), defined as BSNR = 10 log_10( ||Hu − mean(Hu)||_2^2 / (Nσ²) ), where mean(Hu) denotes the mean of the noise-free blurred image Hu and N is the number of pixels.

In minimization problem (4), the noise-dependent upper bound c is very important, since a good choice of it can constrain the error between the restored image and the original image to a reasonable level. To the best of our knowledge, the choice of this parameter is an open problem that has not been solved theoretically. One approach for choosing c refers to the equivalent degrees of freedom (DF), but the calculation of the DF is itself a difficult problem and only an estimate of it can be obtained. A simple strategy for choosing c is to employ a curve approximating the relation between the noise level and c; by fitting experimental data with a straight line, we obtain the linear setting of c adopted in this paper. Besides the setting of c, the choice of the penalty parameters ρ1 and ρ2 is essential to our algorithm. We suggest setting ρ2 = 10^(BSNR/10 − 1) × ρ1 with ρ1 = 1; this setting was obtained from a large number of experiments. Actually, any ρ1, ρ2 > 0 is sufficient for the convergence of the proposed algorithm, so why should ρ1 and ρ2 play different roles when the BSNR varies? The reason is that, when the BSNR becomes higher, the distance between Hu and f becomes smaller. From minimization problem (10), we learn that the auxiliary variable z plays the role of Hu, so a higher BSNR justifies a larger weight ρ2 on the coupling between z and Hu.

4. Numerical Results

In this section, two experiments are presented to demonstrate the effectiveness of the proposed method. They were performed under MATLAB v7.8.0 and Windows 7 on a PC with an Intel Core i5 CPU (3.20 GHz) and 8 GB of RAM. The improved signal-to-noise ratio (ISNR), defined as ISNR = 10 log_10( ||f − u||_2^2 / ||û − u||_2^2 ) with u the original image, f the observed image, and û the restored image, is used to measure the quality of the restoration results. The experiments use the four images shown in Figure 1, named Cameraman, Lena, Shepp-Logan phantom, and Abdomen, all of size 256 × 256.
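For completeness, the two quality measures used in the paper can be computed as follows; here u0 is the original image, f the degraded observation, uh the restored image, g the noise-free blurred image, and sigma2 the noise variance (the variable names are ours).

% BSNR of an observation and ISNR of a restoration, both in dB.
bsnr = @(g, sigma2) 10 * log10( norm(g - mean(g(:)), 'fro')^2 / (numel(g) * sigma2) );
isnr = @(u0, f, uh) 10 * log10( norm(f - u0, 'fro')^2 / norm(uh - u0, 'fro')^2 );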

4.1. Experiment 1

In this experiment, we examine whether the regularization parameter is well estimated by the proposed algorithm. We compare APE-SBA with some well-known TV-based methods from the literature, denoted BFO [5], BMK [19], and LLN [20]. We use the MATLAB commands fspecial('average', 9) and fspecial('gaussian', [9 9], 3) to blur the Lena, Cameraman, and Shepp-Logan phantom images, and the blurred images are then contaminated with Gaussian noise such that the BSNRs of the observed images are 20 dB, 30 dB, and 40 dB. As the stopping criterion for our algorithm, we use a threshold on the relative difference between successive restored images u^k and u^(k−1), where u^k is the restored image in the kth iteration.
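The degraded test images of this experiment can be reproduced along the following lines; only the two blur kernels and the target BSNR levels are taken from the text, while the image file, the noise scaling, and the random seeding are our own illustrative choices.

% Blur an image with one of the two kernels and add Gaussian noise at a
% prescribed BSNR (in dB).
psf_avg   = fspecial('average', 9);
psf_gauss = fspecial('gaussian', [9 9], 3);

u    = im2double(imread('cameraman.tif'));
g    = imfilter(u, psf_avg, 'circular', 'conv');   % noise-free blurred image
BSNR = 30;                                         % e.g. 20, 30, or 40 dB
sigma = sqrt( norm(g - mean(g(:)), 'fro')^2 / (numel(g) * 10^(BSNR/10)) );
f    = g + sigma * randn(size(g));                 % observed image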

Table 1 presents the ISNRs of the restoration results for the different methods. The symbol “—” means that the result is not reported in the original reference, and boldface numbers indicate the best result among the four methods. From Table 1, we see that our algorithm is more competitive than the other three; in only one case is our result worse than, though close to, the best. This also indicates that the regularization parameter obtained by our method is well chosen.

4.2. Experiment 2

In this subsection, we compare our algorithm with two other state-of-the-art algorithms: the primal-dual based method in [13], named AutoRegSel, and the ADMM based method in [17], named C-SALSA. The stopping criterion for all methods is a threshold on the relative change between successive iterates, or a maximum of 1000 iterations. We consider the three image restoration problems adopted in [17]. In the first problem, the PSF is a 9 × 9 uniform blur with noise variance 0.56² (Prob. 1); in the second problem, the PSF is a 9 × 9 Gaussian blur with noise variance 2 (Prob. 2); in the third problem, the PSF is given by h_{ij} = 1/(1 + i² + j²) with noise variance 2 (Prob. 3), where i, j = −7, …, 7.

The plots of ISNR (in dB) versus runtime (in seconds) are shown in Figure 2. Table 2 presents the ISNR values, the number of iterations, and the total runtime needed to reach convergence. Boldface numbers again indicate the best results. From the results, we see that APE-SBA produces the best ISNRs among the compared methods within the least runtime. Moreover, in most cases APE-SBA also obtains the best ISNR within the fewest iterations. Only when dealing with the Abdomen image under Prob. 2 does APE-SBA take more iterations, but less runtime, to reach convergence than C-SALSA, and the total iteration numbers of the two are close to each other. To achieve adaptive image restoration, both C-SALSA and AutoRegSel introduce an inner iterative scheme, whereas APE-SBA contains no inner iteration; obviously, the speed advantage of our method grows as the image size becomes larger. Figure 3 shows the blurred image and the images restored by the different methods for Prob. 2 on the Abdomen image. Our algorithm yields the best ISNR, and similar results are obtained for the other problems in Experiment 2.

5. Conclusions

We developed a split Bregman based algorithm to solve the TV image restoration/deconvolution problem. Unlike some other methods in the literature, our method requires no inner iteration: by using the operator splitting technique and introducing two auxiliary variables, one for the data-fidelity term and one for the TV regularization term, it updates the regularization parameter and restores the blurred image simultaneously. Therefore, the algorithm can run without any manual intervention. The numerical results indicate that the proposed algorithm outperforms some state-of-the-art methods in both speed and accuracy.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants 61203189 and 61304001 and the National Science Fund for Distinguished Young Scholars of China under Grant 61025014.