Abstract

We propose a fourth-order total bounded variation regularization model that effectively reduces undesirable staircase effects. Based on this model, we introduce an improved split Bregman iteration algorithm to obtain the optimal solution, and we establish the convergence of the algorithm. Numerical experiments show that the proposed model yields better visual quality than the second-order total bounded variation model proposed by Liu and Huang (2010).

1. Introduction

Image restoration is one of the earliest and most classical linear inverse problems [1-3]. In this class of problems, a noisy indirect observation $f$ of an original image $u$ is modeled as

$$ f = Ku + n, \quad (1) $$

where $K$ is a bounded linear operator representing the convolution and $n$ denotes the additive noise.

Equation (1) is a typical ill-posed inverse problem; that is, a small change in the observation $f$ may lead to a huge deviation in the solution $u$. Hence, to ensure numerical stability, regularization methods, which add a regularization term to the energy functional, have been developed. The original scheme introduced by Tikhonov and Arsenin [4] is given by

$$ \min_{u} \; J(u) + \frac{\lambda}{2}\|Ku - f\|_{L^{2}(\Omega)}^{2}, \quad (2) $$

where the nonnegative functional $J(u)$, which regularizes the solution by enforcing certain prior constraints on the original image, is known as the regularization penalty, and the term $\frac{1}{2}\|Ku - f\|_{L^{2}(\Omega)}^{2}$, which measures the violation of the relation between $u$ and its observation $f$, is known as the fidelity. The scalar $\lambda > 0$ is called the regularization parameter; it balances fidelity against penalty. By this Tikhonov regularization method, we can compute stable approximations to the original solution. However, this smoothness penalty model does not preserve edges, sparsity patterns, and texture well because of its isotropic smoothing properties. To overcome this shortcoming, Rudin et al. [5] proposed the total variation (TV) based regularization scheme (the ROF model)

$$ \min_{u \in BV(\Omega)} \; \int_{\Omega}|Du| + \frac{\lambda}{2}\|Ku - f\|_{L^{2}(\Omega)}^{2}, \quad (3) $$

where $\Omega \subset \mathbb{R}^{2}$ denotes a bounded subset with Lipschitz boundary and $Du$ represents the distributional derivative of $u$. Many computational methods [5-11] for solving (3) have sprung up in recent years. For the moment we may think of the nonsmooth penalty term $\int_{\Omega}|Du|$ as the $BV$ seminorm. More precisely,

$$ \int_{\Omega}|Du| = \sup\left\{ \int_{\Omega} u \,\operatorname{div}\varphi \, dx : \varphi \in C_{0}^{1}(\Omega;\mathbb{R}^{2}), \; \|\varphi\|_{L^{\infty}(\Omega)} \le 1 \right\}. \quad (4) $$

As a result, the space of bounded variation (BV) is endowed with the norm $\|u\|_{BV(\Omega)} = \|u\|_{L^{1}(\Omega)} + \int_{\Omega}|Du|$, and the Banach space $BV(\Omega)$ is essentially an extension of $W^{1,1}(\Omega)$. This model has been extremely successful in a wide variety of image restoration problems, such as image denoising [5, 12], signal processing [13, 14], image deblurring [15], and image decomposition and texture extraction [16]. However, this model also often produces staircase effects and new edges that do not exist in the true image.
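To make the discrete counterpart of the TV seminorm in (3)-(4) concrete, the following minimal NumPy sketch evaluates an isotropic discrete TV of an image with forward differences; the replicate boundary handling, the function names, and the use of $K = I$ (denoising) in the objective helper are illustrative choices, not the paper's.

```python
import numpy as np

def tv_seminorm(u):
    """Discrete isotropic TV seminorm of a 2-D image, using forward
    differences with replicate (zero-difference) boundary handling."""
    dx = np.diff(u, axis=1, append=u[:, -1:])  # horizontal forward differences
    dy = np.diff(u, axis=0, append=u[-1:, :])  # vertical forward differences
    return float(np.sum(np.sqrt(dx ** 2 + dy ** 2)))

def rof_energy(u, f, lam):
    """Objective value of a discrete ROF-type functional as in (3),
    specialized to K = I (pure denoising) for simplicity."""
    return tv_seminorm(u) + 0.5 * lam * float(np.sum((u - f) ** 2))
```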

In [17, 18], the authors concentrate on the full $BV$ norm as the regularization term; like TV regularization, it is preferable due to its ability to preserve edges in the original image during the reconstruction process. They specify the original problem as the following variational model [18]:

$$ \min_{u \in U} \; \frac{1}{2}\|Ku - f\|_{L^{2}(\Omega)}^{2} + \alpha\left( \int_{\Omega}|Du| + \frac{\beta}{2}\|u\|_{L^{2}(\Omega)}^{2} \right), \quad (5) $$

where $U$ is a closed, convex subset of $L^{2}(\Omega)$. The space $BV(\Omega)$ endowed with the norm $\|u\|_{BV(\Omega)}$ is a Banach space. Compared with model (3), the quadratic regularization term in model (5) serves two purposes. One advantage is that, for $\beta > 0$, it provides a coercive term on the subspace of constant functions, which lies in the kernel of the $\nabla$-operator (in this model $\nabla$ represents the gradient operator). The other advantage of the quadratic regularization term is that it makes it possible to distinguish the structure of the stability results arising from the quadratic term from that arising from the nonquadratic term.

As mentioned in [19-21], this technique preserves edges well, but the images obtained with this model are often piecewise constant. To prevent the staircase effect, we penalize jumps more strongly; this can be achieved by taking the second derivative into account. In this paper we therefore present an improved model in which the second-order diffusive term of model (5) is replaced by a fourth-order diffusive term; the new model substantially reduces the staircase effect while preserving sharp jump discontinuities (edges). The proposed model reads

$$ \min_{u \in U} \; \frac{1}{2}\|Ku - f\|_{L^{2}(\Omega)}^{2} + \alpha\left( \int_{\Omega}|D^{2}u| + \frac{\beta}{2}\|u\|_{L^{2}(\Omega)}^{2} \right), \quad (6) $$

where the Frobenius norm of the Hessian matrix is $|\nabla^{2}u| = \sqrt{u_{xx}^{2} + u_{xy}^{2} + u_{yx}^{2} + u_{yy}^{2}}$ (see [22]) and the space $BV^{2}(\Omega)$ is defined as follows [23].

Definition 1. Let $\Omega \subset \mathbb{R}^{2}$ be an open subset with Lipschitz boundary. $BV^{2}(\Omega)$ is the subspace of functions $u \in W^{1,1}(\Omega)$ such that

$$ \int_{\Omega}|D^{2}u| = \sup\left\{ \int_{\Omega} \nabla u \cdot \operatorname{div}(\varphi)\,dx : \varphi \in C_{c}^{2}(\Omega;\mathbb{R}^{2\times 2}), \; \|\varphi\|_{L^{\infty}(\Omega)} \le 1 \right\} < \infty, $$

where $C_{c}^{2}(\Omega;\mathbb{R}^{2\times 2})$ stands for the set of functions in $C^{2}(\Omega;\mathbb{R}^{2\times 2})$ with compact support in $\Omega$ and $\operatorname{div}$ acts row-wise.

The proofs of existence, uniqueness, convergence, and stability for our proposed model (6) can be found in [18].

The rest of the paper is organized as follows. In Section 2, we give a detailed description of the Bregman iteration method. In Section 3, we elaborate on the extended split Bregman iteration method for the proposed model. In Section 4, the convergence analysis is presented. Numerical experiments demonstrating the effectiveness of our model are provided in Section 5. Finally, concluding remarks are given in Section 6.

2. Bregman Iteration Method

Bregman iteration is a concept that originated in functional analysis for finding extrema of convex functionals [24]; it was first introduced to image processing and studied by Osher et al. [25]. We now present the general formulation of the Bregman iteration technique.

2.1. Bregman Iteration

In [26], Goldstein and Osher considered generalized constrained minimization problems of the form

$$ \min_{u} \; J(u) \quad \text{subject to} \quad H(u) = 0, $$

where $J$ and $H$, defined on $\mathbb{R}^{n}$, are convex functions and $H$ is nonnegative. The associated unconstrained problem is

$$ \min_{u} \; J(u) + \lambda H(u), \quad (11) $$

where $\lambda$ is a positive parameter which should be chosen extremely large.

Definition 2. The Bregman distance of a convex function $J$ between $u$ and $v$ is

$$ D_{J}^{p}(u,v) = J(u) - J(v) - \langle p, u - v\rangle, $$

where $\langle \cdot,\cdot\rangle$ stands for the duality product and $p$ is in the subdifferential of $J$ at $v$, with

$$ \partial J(v) = \left\{ p : J(w) \ge J(v) + \langle p, w - v\rangle \;\; \text{for all } w \right\}. $$

We assume that $H$ is differentiable; then problem (11) can be iteratively solved by [24]

$$ u^{k+1} = \arg\min_{u} \; D_{J}^{p^{k}}(u,u^{k}) + \lambda H(u), \qquad p^{k+1} = p^{k} - \lambda \nabla H(u^{k+1}). \quad (14) $$

The convergence analysis of this Bregman iterative scheme was provided in [25]. The computational performance of Bregman iteration relies on how fast the subproblem can be solved. Let $H(u) = \frac{1}{2}\|Au - b\|_{2}^{2}$, where $A$ is a linear operator. As shown in [25, 27], iteration (14) can be reformulated in the simplified form

$$ u^{k+1} = \arg\min_{u} \; J(u) + \frac{\lambda}{2}\|Au - b^{k}\|_{2}^{2}, \qquad b^{k+1} = b^{k} + b - Au^{k+1}. \quad (15) $$

This Bregman iteration, which was proposed by Osher et al. for TV-based image denoising [25], has two advantages [26]. The first is that the method converges very quickly, especially for problems where $J$ contains an $\ell_1$-regularization term. The second is that the parameter $\lambda$ in (14) remains constant; so, for the purpose of fast convergence, we can choose the $\lambda$ which minimizes the condition number of the subproblem. Due to the high efficiency and robustness of the Bregman iteration method, it has been widely used for image reconstruction [27-29].
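As a concrete illustration of the simplified Bregman iteration (15), the toy sketch below applies it to a small quadratic problem whose inner subproblem has a closed-form solution; the test problem, the parameter values, and the function name are chosen only for illustration and are not taken from the paper.

```python
import numpy as np

def bregman_constrained_l2(A, b, lam=1.0, n_iter=50):
    """Toy instance of the simplified Bregman iteration (15) for
       min_u 1/2 ||u||^2   subject to   A u = b.
    The inner subproblem  min_u 1/2 ||u||^2 + lam/2 ||A u - b_k||^2
    has the closed form  (I + lam A^T A) u = lam A^T b_k."""
    n = A.shape[1]
    M = np.eye(n) + lam * A.T @ A
    b_k = b.copy()
    u = np.zeros(n)
    for _ in range(n_iter):
        u = np.linalg.solve(M, lam * A.T @ b_k)  # subproblem solve
        b_k = b_k + (b - A @ u)                  # "add back the residual" update
    return u

# usage: recover a solution of an underdetermined linear system
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)
u = bregman_constrained_l2(A, b)
print(np.linalg.norm(A @ u - b))  # residual shrinks toward 0 as iterations grow
```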

2.2. Split Bregman

Goldstein and Osher [26] proposed the split Bregman iteration, based on the splitting formulation provided in [30], to attack the general $\ell_1$-regularized optimization problem (11). In a recent paper [31], this method was used to solve general variational models for image restoration. The crucial point of the split Bregman method is that it separates the $\ell_1$ and $\ell_2$ portions in (11). They converted (11) into the constrained optimization problem

$$ \min_{u,d} \; |d|_{1} + H(u) \quad \text{subject to} \quad d = \Phi(u), $$

where $|\cdot|_{1}$ and $H$ stand for convex functions, and then transformed it into the unconstrained problem

$$ \min_{u,d} \; |d|_{1} + H(u) + \frac{\mu}{2}\|d - \Phi(u)\|_{2}^{2}. \quad (18) $$

This problem is similar to (11), so they applied the simplified Bregman iterative algorithm to problem (18):

$$ (u^{k+1}, d^{k+1}) = \arg\min_{u,d} \; |d|_{1} + H(u) + \frac{\mu}{2}\|d - \Phi(u) - b^{k}\|_{2}^{2}, \qquad b^{k+1} = b^{k} + \Phi(u^{k+1}) - d^{k+1}. $$

This is called the two-phase split Bregman iterative algorithm.

We now pay attention to the subproblem

$$ \min_{u,d} \; |d|_{1} + H(u) + \frac{\mu}{2}\|d - \Phi(u) - b^{k}\|_{2}^{2}. $$

Due to the "split" of the $\ell_1$ and $\ell_2$ components of this functional, the minimization can be performed by iteratively minimizing with respect to $u$ and $d$ separately:

Step 1: $u^{k+1} = \arg\min_{u} \; H(u) + \frac{\mu}{2}\|d^{k} - \Phi(u) - b^{k}\|_{2}^{2}$,

Step 2: $d^{k+1} = \arg\min_{d} \; |d|_{1} + \frac{\mu}{2}\|d - \Phi(u^{k+1}) - b^{k}\|_{2}^{2}$.

The speed of the split Bregman method relies on how quickly these two steps can be solved. A wide variety of optimization techniques can be used to solve Step 1, for instance, Fourier transform methods and the conjugate gradient method. Step 2 can be attacked with the extremely fast shrinkage formula, namely,

$$ d^{k+1} = \operatorname{shrink}\!\left(\Phi(u^{k+1}) + b^{k}, \tfrac{1}{\mu}\right), $$

where

$$ \operatorname{shrink}(x,\gamma) = \frac{x}{|x|}\max\left(|x| - \gamma, 0\right). $$
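The shrinkage of Step 2 has the following elementary NumPy form; this is a generic sketch of the soft-thresholding operator rather than the paper's implementation.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding (shrinkage): shrink(x, t) = sign(x) * max(|x| - t, 0).
    It is the closed-form minimizer of |d|_1 + (mu/2)||d - x||^2 with t = 1/mu."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# usage: elementwise shrinkage with threshold 1
x = np.array([-2.0, -0.3, 0.0, 0.7, 3.0])
print(shrink(x, 1.0))  # -> [-1., -0., 0., 0., 2.]
```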

3. The Proposed Algorithm

According to [5], $\ell_1$ estimation procedures are more appropriate for image restoration, and the TV norm is essentially the $\ell_1$ norm of derivatives. However, the fourth-order total variation term in model (6) is a continuous expression; by using a method similar to the discretization of the total variation in [32], we can deduce the discrete fourth-order total variation (let $\nabla^{2}$ represent the discrete Hessian operator with components $\nabla_{xx}$, $\nabla_{xy}$, $\nabla_{yx}$, $\nabla_{yy}$):

$$ \|\nabla^{2}u\|_{1} = \sum_{i,j} \sqrt{(\nabla_{xx}u)_{i,j}^{2} + (\nabla_{xy}u)_{i,j}^{2} + (\nabla_{yx}u)_{i,j}^{2} + (\nabla_{yy}u)_{i,j}^{2}}. $$
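A minimal NumPy sketch of this discrete fourth-order TV term follows; the finite-difference stencils (np.gradient, i.e., central differences with one-sided boundaries) are an assumption and may differ from the discretization used in [32] and in the paper.

```python
import numpy as np

def hessian_tv(u):
    """Discrete fourth-order TV: sum over pixels of the Frobenius norm of the
    finite-difference Hessian [[u_xx, u_xy], [u_yx, u_yy]] (a sketch only)."""
    ux = np.gradient(u, axis=1)   # first differences, horizontal
    uy = np.gradient(u, axis=0)   # first differences, vertical
    uxx = np.gradient(ux, axis=1)
    uxy = np.gradient(ux, axis=0)
    uyx = np.gradient(uy, axis=1)
    uyy = np.gradient(uy, axis=0)
    return float(np.sum(np.sqrt(uxx**2 + uxy**2 + uyx**2 + uyy**2)))
```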

So the proposed model (6) can be rewritten in discrete form as follows:

$$ \min_{u} \; \frac{1}{2}\|Ku - f\|_{2}^{2} + \alpha\left( \|\nabla^{2}u\|_{1} + \frac{\beta}{2}\|u\|_{2}^{2} \right). \quad (24) $$

We then apply the split Bregman iteration to solve problem (24); introducing an auxiliary variable $d = \nabla^{2}u$ yields the constrained problem

$$ \min_{u,d} \; \frac{1}{2}\|Ku - f\|_{2}^{2} + \alpha\left( \|d\|_{1} + \frac{\beta}{2}\|u\|_{2}^{2} \right) \quad \text{subject to} \quad d = \nabla^{2}u. \quad (25) $$

Obviously, problem (25) is equivalent to the following unconstrained problem:

$$ \min_{u,d} \; \frac{1}{2}\|Ku - f\|_{2}^{2} + \alpha\left( \|d\|_{1} + \frac{\beta}{2}\|u\|_{2}^{2} \right) + \frac{\gamma}{2}\|d - \nabla^{2}u\|_{2}^{2}. \quad (26) $$

Concretely, the extended split Bregman iteration for solving (26) is

$$ (u^{k+1}, d^{k+1}) = \arg\min_{u,d} \; \frac{1}{2}\|Ku - f\|_{2}^{2} + \alpha\left( \|d\|_{1} + \frac{\beta}{2}\|u\|_{2}^{2} \right) + \frac{\gamma}{2}\|d - \nabla^{2}u - b^{k}\|_{2}^{2}, \quad (27) $$

with the update formula

$$ b^{k+1} = b^{k} + \nabla^{2}u^{k+1} - d^{k+1} \quad (28) $$

for the Bregman variable $b$.

More precisely, we state our algorithm as follows.

Algorithm 3. We have the following steps.
(1) Set the input values $f$, $K$, and the model parameters; initialize $u^{0}$, $d^{0}$, and $b^{0}$, and set $k = 0$.
(2) Update $u^{k+1}$ from the $u$-subproblem of (27).
(3) Update $d^{k+1}$ from the $d$-subproblem of (27).
(4) Update $b^{k+1}$ from (28).
(5) If the stopping criterion holds, output the restored image $u^{k+1}$. Otherwise, set $k = k + 1$ and go to Step (2).
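For concreteness, the following is a hedged NumPy sketch of a split Bregman solver of this type for a Hessian-based (fourth-order) regularizer with periodic boundary conditions. The parameter names and default values, the finite-difference stencils, and the exact weighting of the data, quadratic, and regularization terms are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def psf2otf(psf, shape):
    """Embed a small kernel in a `shape`-sized array (periodic boundary) and
    return its optical transfer function (2-D FFT)."""
    otf = np.zeros(shape)
    otf[:psf.shape[0], :psf.shape[1]] = psf
    for axis, size in enumerate(psf.shape):
        otf = np.roll(otf, -(size // 2), axis=axis)  # centre kernel at (0, 0)
    return np.fft.fft2(otf)

def hessian_otfs(shape):
    """OTFs of four second-difference operators (u_xx, u_xy, u_yx, u_yy)."""
    dxx = np.array([[1.0, -2.0, 1.0]])           # horizontal second difference
    dyy = dxx.T                                   # vertical second difference
    dxy = np.array([[1.0, -1.0], [-1.0, 1.0]])    # mixed difference (used twice)
    return [psf2otf(k, shape) for k in (dxx, dxy, dxy, dyy)]

def shrink_iso(v, t):
    """Isotropic shrinkage applied jointly to a list of component arrays."""
    norm = np.sqrt(sum(vi ** 2 for vi in v))
    scale = np.maximum(norm - t, 0.0) / np.maximum(norm, 1e-12)
    return [scale * vi for vi in v]

def psbi_deblur(f, psf, lam=1e3, mu=1.0, gamma=10.0, n_iter=100):
    """Sketch of a split Bregman loop for
       min_u  lam/2 ||K u - f||^2 + mu/2 ||u||^2 + |Hessian(u)|_1,
    an illustrative weighting, not the paper's exact model (24)."""
    K = psf2otf(psf, f.shape)
    H = hessian_otfs(f.shape)
    F = np.fft.fft2(f)
    denom = lam * np.abs(K) ** 2 + mu + gamma * sum(np.abs(Hi) ** 2 for Hi in H)
    u = f.copy()
    d = [np.zeros_like(f) for _ in H]
    b = [np.zeros_like(f) for _ in H]
    for _ in range(n_iter):
        # u-subproblem: a linear system that is diagonal in the Fourier domain
        rhs = lam * np.conj(K) * F
        for Hi, di, bi in zip(H, d, b):
            rhs += gamma * np.conj(Hi) * np.fft.fft2(di - bi)
        u = np.real(np.fft.ifft2(rhs / denom))
        # d-subproblem: closed-form isotropic shrinkage of the Hessian components
        Hu = [np.real(np.fft.ifft2(Hi * np.fft.fft2(u))) for Hi in H]
        d = shrink_iso([Hui + bi for Hui, bi in zip(Hu, b)], 1.0 / gamma)
        # Bregman variable update
        b = [bi + Hui - di for bi, Hui, di in zip(b, Hu, d)]
    return u
```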

4. Convergence Analysis

In this section, we concentrate on the rigorous convergence proof of our iterative algorithm. Our analysis below is similar to that in [33, 34], where the authors presented the analysis of the unconstrained split Bregman iteration in detail.

We note that the subproblems in (27) are convex and differentiable, so the first-order optimality conditions for $u^{k+1}$ and $d^{k+1}$ are obtained by setting the corresponding derivatives to zero; we refer to the resulting system as (32). Condition (32) will be used for analyzing the convergence property of our algorithm.

Theorem 4. Assume that there exists a unique solution $(u^{*}, d^{*})$ of problem (26). Then the sequence $\{u^{k}\}$ generated by the extended split Bregman iteration (27)-(28) satisfies

$$ \lim_{k\to\infty} E(u^{k}, d^{k}) = E(u^{*}, d^{*}) \quad \text{and} \quad \lim_{k\to\infty}\|u^{k} - u^{*}\|_{2} = 0, $$

where $E$ denotes the objective functional in (26).

Proof. Reference [17] has shown that problem (26) admits a unique solution, so the first-order optimality conditions hold at this solution; writing condition (32) at $(u^{*}, d^{*})$ with the corresponding auxiliary variable $b^{*}$ gives the fixed-point system (35). This illustrates that $u^{*}$, $d^{*}$, and $b^{*}$ are fixed points of (27)-(28).
Let $(u^{k}, d^{k}, b^{k})$ denote the sequence generated by algorithm (27)-(28), and let $u^{k} - u^{*}$, $d^{k} - d^{*}$, and $b^{k} - b^{*}$ represent the corresponding errors. Subtracting every equation in (32) from the corresponding equation in (35) gives the error system (36). We take the inner product of the first and second equations in (36) with the errors in $u$ and $d$, respectively; applying the same manipulation to the third equation of (36), we obtain (37). Obviously, the third equation of (37) can be rewritten as (38). Combining the first two equations of (37) with (38), and summing the resulting identity over the iterations from $k = 0$ to $K$, we get (40). Because the objective functional is convex, all terms involved in (40) are nonnegative, which leads to inequality (42). Under the assumptions of Theorem 4 and letting $K \to \infty$, inequality (42) yields several conclusions, and hence (44) holds. The Bregman distance inequality, associated with (47) and combined with (46) and the continuity of the objective functional, gives (50); by similar arguments, we obtain (51). Combining (50) and (51), we obtain (52). Finally, from (44) and (52), the main results follow. This proves Theorem 4.

5. Numerical Experiments

In this section, a number of experiments are performed to demonstrate the effectiveness and efficiency of our proposed split Bregman iteration (PSBI) algorithm for the fourth-order diffusive model (6), which is compared with the split Bregman iteration (SBI) for the second-order diffusive model provided in [17]. All experiments are carried out in the MATLAB 7.10 environment on a desktop with a Windows 7 operating system, a 3.00 GHz Intel Pentium(R) D CPU, and 1.00 GB of memory.

The performance of all algorithms is measured by the improved signal-to-noise ratio (ISNR) and the mean squared error (MSE), defined as

$$ \mathrm{ISNR} = 10\log_{10}\frac{\|f - u\|_{2}^{2}}{\|\hat{u} - u\|_{2}^{2}}, \qquad \mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(\hat{u}_{i,j} - u_{i,j}\right)^{2}, $$

where $f$, $u$, and $\hat{u}$ denote the degraded, original, and recovered images, respectively, and $M$ and $N$ are the dimensions of the image. The higher the ISNR or the lower the MSE, the better the quality of the deblurred image. Moreover, the stopping criterion of all algorithms is that the relative difference between consecutive iterates of the deblurred image satisfies

$$ \frac{\|u^{k+1} - u^{k}\|_{2}}{\|u^{k}\|_{2}} \le \varepsilon $$

for a prescribed tolerance $\varepsilon$.
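The two quality measures can be computed directly, as in the short NumPy sketch below; the function names are ours.

```python
import numpy as np

def mse(u_rec, u_orig):
    """Mean squared error between the recovered and original images."""
    return float(np.mean((u_rec - u_orig) ** 2))

def isnr(f, u_orig, u_rec):
    """Improved SNR (dB): error of the degraded image f relative to the
    error of the recovered image."""
    return 10.0 * np.log10(np.sum((f - u_orig) ** 2) / np.sum((u_rec - u_orig) ** 2))
```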

Three classical grayscale images, shown in Figure 1, are used for synthetic degradations in our experiments. The blur kernels used are Gaussian blur (variance of 3), linear motion blur (length of 10 pixels and direction of 30 degrees), out-of-focus blur (size of 10 x 10 pixels, defocus radius of 4), and uniform blur. Periodic boundary conditions are assumed in the following experiments.
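For reference, the sketch below shows one way to build a Gaussian kernel and apply it with the periodic boundary conditions assumed above; the kernel size, the interpretation of "variance of 3" (here passed as the standard deviation argument), and the function names are illustrative choices.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """size x size Gaussian blur kernel with standard deviation sigma,
    normalized to sum to one."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur_periodic(u, psf):
    """Circular (periodic-boundary) convolution of an image with a kernel,
    implemented through the FFT."""
    otf = np.zeros_like(u, dtype=float)
    otf[:psf.shape[0], :psf.shape[1]] = psf
    for axis, size in enumerate(psf.shape):
        otf = np.roll(otf, -(size // 2), axis=axis)  # centre kernel at (0, 0)
    return np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(otf)))
```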

In the first experiment, the original image "Cameraman," which has a complex background, is blurred by Gaussian blur, out-of-focus blur, linear motion blur, and uniform blur, respectively. The blurred images are shown in Figures 2(a)-2(d), and Figures 2(e)-2(h) show the deblurred results of the split Bregman iteration method for all the blur cases; the parameter values for the SBI method are the same as those given by the authors in [17]. In order to give an explicit comparison, we show the restored images obtained by our PSBI method in Figures 2(i)-2(l). By comparing the "sky" region in the deblurred images, we can see that the PSBI method recovers more details (and removes the staircase effect) better than the SBI method. However, due to the fourth-order diffusive term, our proposed method costs a little more time than the SBI method. Table 1 gives the CPU times, ISNR values, and MSE values of the SBI and PSBI methods for all the blur kernels mentioned above, which shows that our algorithm works well for images with a complex background.

In the next test, we use Gaussian blur and out-of-focus blur to degrade the "Boat" image and then run the two algorithms several times to obtain the best results; in the PSBI method, the parameters and iteration numbers are tuned separately for the Gaussian blur and the out-of-focus blur. Figure 3 shows the recovered results of the two methods, and for better illustration we provide a close-up of an image region in Figure 4. We can see from Figure 4 that the images restored by PSBI are better than the ones deblurred by SBI; see, for example, the "letters" on the stern of the "Boat." Meanwhile, Table 2 shows that the ISNR values obtained by PSBI are higher than those obtained by SBI while the MSE values are lower. So it is clear that our method can effectively reduce the "staircase" effect which often appears in the second-order diffusive model.

Figures 5 and 6 show the comparison between the PSBI and SBI algorithms for the image "Lena." We first blur "Lena" with linear motion blur and uniform blur and then select the deblurred result that looks best after carefully tuning the parameters and iteration numbers for each blur. It can be seen from Figures 5 and 6 that many staircase artifacts in the smooth regions are clearly visible in the images restored by the SBI model, while they seldom appear in our results, and the upper-left corner of "Lena" deblurred by PSBI is visually better than that obtained by the SBI model. Table 3 shows that the restored images obtained by our model have higher ISNR values and lower MSE values than those of the second-order model.

6. Conclusion

In this paper, we propose a fourth-order total bounded variation regularization based image deblurring model and exploit the split Bregman iteration method to solve it. The convergence analysis of our algorithm is provided. Numerical experiments show that our algorithm works well for images with either complex or simple backgrounds, and in our synthetic experiments the fourth-order diffusive model yields better results than the second-order diffusive model. We believe that the proposed model can be extended to further applications in image processing and computer vision.

Acknowledgments

The authors would like to thank the referees very much for their valuable comments and suggestions.