Abstract

To preserve edges, multiplicative noise removal models based on total variation (TV) regularization have been widely studied, but they suffer from the staircase effect. In this paper, to preserve edges and reduce the staircase effect, we develop a hybrid variational model based on the variable splitting method for multiplicative noise removal; the new model is a strictly convex objective function that combines the total variation regularization with a modified regularization term. We use a linear alternating direction method to find the minimizer and give a convergence proof of the proposed algorithm. Experimental results verify that the proposed model obtains better results for removing multiplicative noise than recent methods.

1. Introduction

Image denoising is one of the fundamental problems in image processing and computer vision. A recorded image may be distorted by unexpected random noise; for example, Synthetic Aperture Radar (SAR) images [1, 2] and ultrasound images [3] are often strongly corrupted by multiplicative noise, so removing multiplicative noise is an important task. In this paper, we focus on the multiplicative noise removal problem

$$f = u\,\eta, \qquad (1)$$

where $f$ is the observed image and $u$ is the original image. $\eta$ is the multiplicative noise, which follows a Gamma law with mean one; its probability density function is given by

$$p(\eta) = \frac{L^{L}}{\Gamma(L)}\,\eta^{L-1}e^{-L\eta}, \qquad \eta \ge 0, \qquad (2)$$

where $L$ is the number of looks and $\Gamma(\cdot)$ is the Gamma function.
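As an illustration of the noise model (1)-(2) (not part of the original paper), the following NumPy sketch simulates $L$-look Gamma speckle with mean one and applies it to a clean image; the function name and the synthetic test image are assumptions made here.

```python
import numpy as np

def add_gamma_speckle(u, L, rng=None):
    """Multiply a clean image u by Gamma-distributed speckle with mean one.

    The speckle is drawn from a Gamma distribution with shape L and scale 1/L,
    so its mean is 1 and its variance is 1/L (larger L means weaker noise).
    """
    rng = np.random.default_rng() if rng is None else rng
    eta = rng.gamma(shape=L, scale=1.0 / L, size=u.shape)
    return u * eta

# Illustrative usage on a constant synthetic image.
if __name__ == "__main__":
    clean = np.full((64, 64), 128.0)
    noisy = add_gamma_speckle(clean, L=3)
    print(noisy.mean(), noisy.var())  # mean close to 128, variance close to 128**2 / 3
```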

There are many research works in the field of denoising. For example, Rosa-Zurera et al. [1] and Sveinsson and Benediktsson [2] used wavelet methods to reduce speckle noise, and Ramamoorthy et al. [3] gave an efficient method for speckle reduction in ultrasound liver images. To remove multiplicative noise, Aubert and Aujol [4] used the maximum a posteriori (MAP) estimator to establish a nonconvex variational model (the AA model)

$$\min_{u>0}\; \int_{\Omega}\Big(\log u + \frac{f}{u}\Big)\,dx + \lambda \int_{\Omega}|\nabla u|\,dx, \qquad (3)$$

where $\int_{\Omega}|\nabla u|\,dx$ is the total variation (TV) regularization term and $\lambda > 0$ is a regularization parameter.

Because the AA model is nonconvex, it does not in general have a global minimal solution. Later, Shi and Osher [5] used the logarithm transformation $v = \log u$ and derived a globally convex model (the SO model)

$$\min_{v}\; \int_{\Omega}|\nabla v|\,dx + \lambda \int_{\Omega}\big(v + f e^{-v}\big)\,dx, \qquad (4)$$

where $v = \log u$, the second term is the data fidelity term, and $\lambda > 0$ is a regularization parameter. Using a relaxed inverse scale space flow method, the authors obtained an excellent denoising effect, although it took a long time. To improve the computation speed, a new variational model was introduced by Huang et al. [6] through variable splitting: the corresponding minimization problem (5) couples the log-domain data fidelity acting on an auxiliary variable, a quadratic term linking the auxiliary variable to the image, and a TV regularization term on the image. Problem (5) was solved by an alternating minimization method, and the two corresponding subproblems can be solved by Newton's method and the dual method [7], respectively. The alternating iterative algorithm ensures that the solution of the model is unique and that the iterative sequence converges to the optimal solution. Similarly to the SO model, Bioucas-Dias and Figueiredo [8] proposed a new speckle reduction scheme by combining operator splitting with the augmented Lagrangian method. Taking the I-divergence as the data-fitting term together with the TV regularizer, Steidl and Teuber [9] introduced a variational restoration model. Inspired by the connection between the augmented Lagrangian algorithm and the primal-dual hybrid gradient algorithm, Chen et al. [10] developed an improved primal-dual algorithm for multiplicative noise removal. Durand et al. [11] proposed a hybrid method combining data-fitting on the curvelet frame coefficients with the total variation. Combining a weighted TV with the data term in (4), a nonconvex sparse regularization variational model was proposed in [12]. Using the constrained TV norm, Hao et al. [13] put forward a dual method and its acceleration. For other methods for multiplicative noise removal, refer to [14–22].
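For completeness, the following LaTeX note sketches the standard reasoning (well known in this literature, not quoted from the paper) that connects the Gamma likelihood in (2) to the nonconvex AA data term in (3) and, via the logarithm transformation, to the convex data term in (4):

```latex
% MAP estimation under L-look Gamma noise: maximizing the posterior is
% equivalent to minimizing the negative log-likelihood plus a prior term.
% For f = u*eta with eta ~ Gamma as in (2), the negative log-likelihood of u
% is, up to positive multiplicative and additive constants,
%   \int_\Omega ( \log u + f/u ) dx,
% i.e. the AA data term in (3). Substituting v = \log u yields
\[
  \log u + \frac{f}{u}
  \;\xrightarrow{\;v=\log u\;}\;
  v + f e^{-v},
  \qquad
  \frac{\partial^{2}}{\partial v^{2}}\bigl(v + f e^{-v}\bigr)
  = f e^{-v} > 0 \quad \text{for } f > 0,
\]
% so the transformed data term in (4) is strictly convex in v.
```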

However, it is well known that the TV model suffers from the so-called staircase effect. To reduce the staircase effect, several methods have been introduced [23–25]. Inspired by the method in [6], we split the second-order TV regularization term into two low-order regularization terms by variable splitting. To solve the proposed model, we design a linear alternating direction method to find the minimizer of the resulting objective function. Our experimental results show that the proposed method performs well for multiplicative noise removal.

The outline of this paper is as follows. In Section 2, we introduce the hybrid variational model and develop an alternating direction algorithm. In Section 3, we give the convergence analysis of the proposed algorithm. In Section 4, we present numerical experiments that demonstrate the proposed algorithm. Finally, conclusions are given in Section 5.

2. The Proposed Model and Algorithm

2.1. The Proposed Model

In [23], the authors proposed a high-order variational model, which successfully alleviates the staircase effect for additive Gaussian noise. Inspired by the splitting idea of [6], we introduce an auxiliary variable into the regularization term of [23] and divide the second-order derivative term into two low-order terms. The aim is both to lower the order of the derivatives acting on the image and to alleviate the staircase effect; meanwhile, the proposed model retains a TV term, which preserves edges. That is to say, we propose the hybrid variational model (6), based on variable splitting, for multiplicative noise removal: following [5, 6], the model is formulated on the logarithm of the image, an auxiliary vector field of the image approximates the gradient of the log-image, and three regularization parameters balance the regularization terms and the data fidelity term. (A plausible explicit form of (6), reconstructed under these assumptions, is sketched below.)
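The explicit formula of (6) is not reproduced in this version of the text. The following LaTeX sketch is only one plausible reconstruction consistent with the description above (a regularized auxiliary vector field, a quadratic coupling, a TV term, and the Gamma log-domain fidelity); the symbols $u$, $\mathbf{w}$, $\alpha$, $\beta$, $\theta$, and $\lambda$ are names assumed here, not necessarily the paper's notation.

```latex
% One plausible form of the hybrid model (6), under the stated assumptions:
%   u                    : log-transformed image
%   w                    : auxiliary vector field, intended to approximate \nabla u
%   alpha, beta, lambda  : regularization parameters;  theta : coupling weight
\[
  \min_{u,\mathbf{w}}\;
      \alpha \int_\Omega |\nabla \mathbf{w}|\,dx
    + \frac{\theta}{2}\int_\Omega |\nabla u - \mathbf{w}|^{2}\,dx
    + \beta \int_\Omega |\nabla u|\,dx
    + \lambda \int_\Omega \bigl(u + f e^{-u}\bigr)\,dx .
\]
% As theta -> infinity, w -> \nabla u and the first two terms approach the
% second-order TV of u, which matches the behaviour described in Section 2.1.
```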

The proposed model has the following advantages. First, it is strictly convex and has a unique minimal solution, which distinguishes it from the models in [4, 12, 15]. Second, when the coupling parameter tends to infinity, the auxiliary vector field tends to the gradient of the log-image, and the first two terms turn into the second-order TV proposed in [23]; in that work it was shown that the second-order TV removes additive noise and alleviates the staircase effect in smooth regions better than TV does. Hence, by balancing the corresponding parameters, the proposed model can remove strong noise and preserve the structures of the restored image. Third, [20] also studied a high-order variational model for multiplicative noise removal, but it requires the restored images to belong to a more complicated function space (see [20]).

2.2. The Proposed Algorithm

To solve the proposed model (6), we use the following alternating direction method: the joint minimization is split into two subproblems, (7) and (8), which are solved alternately, one updating the auxiliary vector field with the image fixed and the other updating the image with the vector field fixed. (The generic block-coordinate structure of this alternation is sketched below.)
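A generic LaTeX sketch of the block-coordinate (alternating minimization) structure behind subproblems (7) and (8); here $E$ denotes the objective of (6) and the superscript $k$ indexes the outer iterations (this notation is assumed, not taken from the paper):

```latex
% Alternating minimization of E(w, u): each subproblem fixes one block.
\[
  \mathbf{w}^{k+1} = \arg\min_{\mathbf{w}}\; E\bigl(\mathbf{w},\,u^{k}\bigr)
  \qquad \text{(cf. subproblem (7))},
\]
\[
  u^{k+1} = \arg\min_{u}\; E\bigl(\mathbf{w}^{k+1},\,u\bigr)
  \qquad \text{(cf. subproblem (8))}.
\]
% Each step cannot increase E; this monotonicity is the starting point of the
% convergence analysis in Section 3.
```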

From (7) and (8), it is interesting to see that the iteration is similar in spirit to the two-step idea in [24]: the first step obtains a smooth vector field, and the second recovers the image from the smoothed vector field of the first step. Because two low-order variational models are used to restore the image, the two-step method of [24] preserves edges and details better than [23], and it has been shown to reduce the staircase effect well. The difference between [24] and the proposed iterative method is that here the vector field and the reconstructed image are updated in an intertwined fashion, whereas in [24] the two steps are decoupled; as a result, the proposed iteration can remove noise and preserve edges better.

For (7), the vector field can be computed by Chambolle's dual method [7], formula (9), which expresses the solution in terms of the discrete divergence of a matrix-valued dual variable together with the first-order forward and backward difference operators. The dual variable is obtained by a fixed point iteration: starting from a given initialization, it is updated repeatedly with a fixed time step, where the two superscripts denote the outer loop and the inner loop, respectively.
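For intuition, the following NumPy sketch implements Chambolle's dual fixed point iteration for scalar TV denoising, which is the classical form of the projection algorithm in [7]; in the paper the same idea is applied to a vector field with a matrix-valued dual variable, so this scalar version is an illustrative assumption rather than the exact update (9).

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary conditions."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_tv(g, lam, tau=0.125, n_iter=100):
    """Solve min_u ||u - g||^2 / (2*lam) + TV(u) via the dual fixed point
    p^{n+1} = (p^n + tau*grad(div p^n - g/lam)) / (1 + tau*|grad(div p^n - g/lam)|)."""
    px = np.zeros_like(g)
    py = np.zeros_like(g)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - g / lam)
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return g - lam * div(px, py)   # denoised image
```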

For (8), we use the split Bregman method [26–30], which is widely used for optimization problems with an L1 regularization term. To apply the split Bregman method to (8), we introduce an auxiliary variable for the gradient of the image, enforce the corresponding constraint through a quadratic penalty function with a penalty parameter, and obtain an unconstrained problem. The resulting split Bregman algorithm (12) alternates three updates: the image update, the auxiliary-variable update, and the Bregman-variable update, where the auxiliary-variable update is carried out by the soft-thresholding (shrinkage) operator.
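A one-function NumPy sketch of the componentwise soft-thresholding (shrinkage) operator that is assumed here to be the thresholding operator used in the auxiliary-variable update of (12):

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding (shrinkage): shrink(x, t) = sign(x) * max(|x| - t, 0).

    Applied componentwise. For vector-valued quantities (e.g., gradient fields
    stacked along the last axis), |x| is usually replaced by the Euclidean norm
    over that axis.
    """
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```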

The first equation in (12) has no closed-form solution. To solve it easily, inspired by the linearization idea in [31], we expand its smooth part in a second-order Taylor formula at the current iterate and replace the Hessian matrix by a constant multiple of the identity; the first equation in (12) then simplifies to a quadratic problem whose closed-form solution is given by (15). (A generic version of this linearization step is sketched below.)
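A generic LaTeX sketch (assumed notation, not the paper's formulas) of why replacing the Hessian by a constant multiple of the identity yields a closed-form update; here $F$ stands for the smooth part being linearized and $\delta > 0$ for the constant:

```latex
% Linearize the smooth term F around the current iterate u^k:
%   F(u) \approx F(u^k) + <\nabla F(u^k), u - u^k> + (\delta/2) ||u - u^k||^2,
% i.e. the Hessian of F is replaced by \delta I. Minimizing the resulting
% quadratic surrogate (any remaining quadratic penalty terms are omitted here
% for brevity) gives the explicit update
\[
  u^{k+1}
  = \arg\min_{u}\;
      \bigl\langle \nabla F(u^{k}),\, u - u^{k} \bigr\rangle
      + \frac{\delta}{2}\,\bigl\lVert u - u^{k} \bigr\rVert_{2}^{2}
  \;=\; u^{k} - \frac{1}{\delta}\,\nabla F(u^{k}),
\]
% which is the kind of closed-form solution that (15) provides in the text.
```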

The complete algorithm for solving the proposed model (6) can be summarized in the following steps.

Algorithm 1. The linear alternating direction method for solving the proposed model (6)
Initialization. Choose the regularization and penalty parameters, and initialize the image estimate, the auxiliary vector field, and the split Bregman variables.
Step 1. Compute the vector field by (9).
Step 2. Compute the image estimate by (15).
Step 3. Compute the auxiliary shrinkage variable by the second formula of (12).
Step 4. Update the Bregman variable by the third formula of (12).
Step 5. If the stopping condition is not satisfied, increase the iteration counter and return to Step 1. (A Python skeleton of this loop, built from hypothetical helper functions, is given after the algorithm.)
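The following Python skeleton mirrors the loop structure of Algorithm 1 only; the concrete updates (9), (15), and the second and third formulas of (12) are not reproduced here, so they are passed in as hypothetical callables, and all parameter names are assumptions.

```python
import numpy as np

def algorithm1(f_log, step1_w, step2_u, shrink, grad, gamma,
               max_iter=300, tol=1e-4):
    """Skeleton of the linear alternating direction method (Algorithm 1).

    f_log is the log-transformed noisy image. The callables stand in for the
    paper's updates and are assumptions made here:
      step1_w(u, w)        -> new vector field      (formula (9))
      step2_u(u, w, d, b)  -> new image estimate    (formula (15))
      shrink(x, t)         -> soft-thresholding     (second formula of (12))
      grad(u)              -> discrete gradient of u
    """
    u = f_log.copy()                      # initial log-image estimate
    w = np.zeros(f_log.shape + (2,))      # auxiliary vector field
    d = np.zeros_like(w)                  # split Bregman auxiliary variable
    b = np.zeros_like(w)                  # Bregman variable

    for _ in range(max_iter):
        w = step1_w(u, w)                           # Step 1
        u_new = step2_u(u, w, d, b)                 # Step 2
        d = shrink(grad(u_new) + b, 1.0 / gamma)    # Step 3 (threshold assumed)
        b = b + grad(u_new) - d                     # Step 4
        done = np.linalg.norm(u_new - u) < tol * max(np.linalg.norm(u), 1.0)
        u = u_new
        if done:                                    # Step 5: stopping rule
            break
    return np.exp(u)                                # back to the intensity domain
```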

3. Convergence Analysis of the Proposed Algorithm

From the iterative schemes of the proposed model, we can obtain the following results.

Theorem 2. Equation (6) is a strictly convex model, and it has a unique solution.

Proof. For (6), it is easy to see that the first and third terms are convex, so we only need to prove that the second term and the last term are strictly convex. Consider the data-fidelity integrand $g(t) = t + f e^{-t}$ with $f > 0$. Since $g''(t) = f e^{-t} > 0$, the function $g$ is strictly convex, so for any $t_{1} \neq t_{2}$ and any $\alpha \in (0,1)$ we have

$$g\bigl(\alpha t_{1} + (1-\alpha) t_{2}\bigr) < \alpha\, g(t_{1}) + (1-\alpha)\, g(t_{2}).$$

A corresponding strict inequality holds for the second term. Combining these two strict inequalities, (17) and (18), with the convexity of the remaining terms shows that the objective in (6) is strictly convex, and therefore it has a unique solution.

Theorem 3. Let the sequence $\{(\mathbf{w}^{k}, u^{k})\}$ be generated by (7) and (8). Then $\|u^{k+1} - u^{k}\|_{2} \to 0$ and $\|\mathbf{w}^{k+1} - \mathbf{w}^{k}\|_{2} \to 0$ as $k \to \infty$.

Proof. Denote by $E(\mathbf{w}, u)$ the objective function of (6). Since $\mathbf{w}^{k+1}$ minimizes $E(\cdot, u^{k})$ and $u^{k+1}$ minimizes $E(\mathbf{w}^{k+1}, \cdot)$, we have
$$E(\mathbf{w}^{k+1}, u^{k+1}) \le E(\mathbf{w}^{k+1}, u^{k}) \le E(\mathbf{w}^{k}, u^{k}),$$
so the objective values are nonincreasing. Because $E(\mathbf{w}^{k+1}, \cdot)$ is convex and $u^{k+1}$ is its minimizer, the first-order optimality condition holds at $u^{k+1}$; expanding $E(\mathbf{w}^{k+1}, \cdot)$ in a second-order Taylor formula around $u^{k+1}$ and using the strict convexity and coercivity of $E$, we can bound the Hessian from below by a positive definite matrix and obtain
$$E(\mathbf{w}^{k+1}, u^{k}) - E(\mathbf{w}^{k+1}, u^{k+1}) \ge c\,\|u^{k+1} - u^{k}\|_{2}^{2}$$
for some constant $c > 0$. Summing this inequality from $k = 0$ to $\infty$ and using the fact that $E$ is bounded below, we obtain $\sum_{k}\|u^{k+1} - u^{k}\|_{2}^{2} < \infty$, and hence $\|u^{k+1} - u^{k}\|_{2} \to 0$. Similarly, we obtain $\|\mathbf{w}^{k+1} - \mathbf{w}^{k}\|_{2} \to 0$.

4. Experimental Results

In this section, numerical results are presented to demonstrate the performance of the proposed algorithm. All simulations are performed in MATLAB 7.8 on an Intel Core 3.10 GHz PC. The restoration results are compared with those obtained by the AA model [4], the model of Huang et al. [6], and the model of Han et al. [12], all applied to the same noisy images. To simplify parameter tuning, we normalize all images to the same size and rescale the gray intensities of each image so that the logarithm can be computed on the image. In the tests, each pixel of an original image is degraded by Gamma noise with mean one; the noise level is controlled by the value of $L$ in (2), and we test all of the following images at three noise levels, $L = 3$, $5$, and $9$. The Peak Signal-to-Noise Ratio (PSNR), computed from the mean squared error between the restored image and the noise-free image over the $M \times N$ pixel grid, is used to measure the quality of the restored images. (A simple implementation of this measure is sketched below.)
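A short NumPy sketch of the PSNR computation; the peak value is an assumption made here (255 for an 8-bit intensity range) and should be adjusted to whatever normalization is actually used.

```python
import numpy as np

def psnr(restored, clean, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a restored and a clean image.

    peak is the assumed maximum intensity of the image range; use peak=1.0
    for images normalized to [0, 1].
    """
    mse = np.mean((restored.astype(np.float64) - clean.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```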

For a fair comparison, some parameters are set to the values recommended by the respective authors, and the remaining parameters are adjusted so that every compared method achieves its best results. The stopping condition for all algorithms is that the best PSNR value is reached or the maximum number of iterations, 300, is exceeded. The regularization parameter and the discrete time step of the AA model are chosen to give its best PSNR, and the parameters of Huang et al. [6] and Han et al. [12] are set as recommended by their authors. In the proposed algorithm, we first fix one of the parameters and then search over the others; the settings found in this way give good results for the following test images. From (6), we can see that as the coupling parameter grows, the auxiliary vector field tends to the gradient of the log-image and the first two terms turn into the second-order term, which removes strong noise well; similarly, when the weight on the gradient of the log-image is very large, the gradient is driven toward zero, which means the restored image becomes very smooth. We therefore fix one parameter and tune the other two according to the image and the noise level; in general, larger values are taken when the noise level is higher.

The PSNR values and running times of the different algorithms are listed in Table 1, with the best PSNR values shown in bold. By inspection of Table 1, we find that the proposed algorithm achieves the highest PSNR for most of the denoising results; even in the unsuccessful cases, it yields PSNR values comparable to the best values obtained by Han et al. [12]. Regarding the running time, different algorithms need different amounts of time on different images, so it is hard to judge which algorithm is uniformly faster or slower. For example, although the AA model uses the gradient descent method, its computation is very fast owing to the larger time step. The method of [6] utilizes a fast algorithm to solve its model but takes longer than the other algorithms; this may be related to the recommended parameters, which are chosen to obtain the best PSNR.

To further evaluate the performance of the different algorithms, we compute the average values over the three noise levels in Table 2. From Table 2, we observe that the PSNR values of the proposed algorithm exceed those of Han et al. [12] by about 0.3 dB on average, those of Huang's model [6] by about 0.4 dB, and those of the AA model [4] by about 1.3 dB. The average time spent by the proposed algorithm is less than that of the other three methods when the noise level is 3 or 9, and when the noise level is 5, its running time is almost the same as that of AA. Consequently, we believe that the proposed model performs better on average than the other three models.

To make a visual comparison of the restored images, we also show the restoration results for three test images at the three noise levels in Figure 1. Figure 1(a) shows the clean test images: Boat, House, and Nimes. Figure 1(b) shows the corresponding noisy images; from left to right, the noise levels are L = 3, 5, and 9, respectively. The corresponding denoising results are shown in Figure 2: Figure 2(a) shows the results of the AA method [4], Figure 2(b) those of Huang's method [6], Figure 2(c) those of Han's method [12], and Figure 2(d) those of our method. From Figure 2, we can see that the proposed model produces better visual quality than the other three methods.

To illustrate the advantage of the TV term in (6), we take the Lena and House images at a fixed noise level as examples. To obtain the version of (6) without the TV term, we set the weight of that term to zero, and the other parameters are then adjusted so that the best results are obtained. The experimental results are shown in Figure 3: Figures 3(a)–3(d) compare the restorations obtained without and with the TV term, and Figures 3(e) and 3(f) show the PSNR curves for the Lena and House images. From Figures 3(e) and 3(f), we can see that the proposed algorithm obtains better results when the TV term is retained.

Finally, we take the House image at a fixed noise level as an example to compare the new method with the recent model [15], which adopts a relaxed alternating direction method and a primal-dual method. The PSNR and error curves are shown in Figure 4. From Figure 4, we can see that the two methods reach almost the same PSNR as the number of iterations increases; however, the new method reaches its best PSNR faster than [15]. In addition, the new model is strictly convex and a convergence proof is given here, whereas [15] does not provide one.

5. Conclusions

In this paper, we have studied a hybrid variational model for the multiplicative noise removal problem. The proposed model is strictly convex and has a unique solution. To solve the model, a linear alternating minimization algorithm is employed, which splits the original problem into two subproblems; these are solved by the dual method and the split Bregman method, respectively. Using a second-order Taylor expansion, we obtain a closed-form solution for the linearized subproblem. In addition, we give a convergence analysis of the proposed algorithm. Experimental results show that the proposed method is more effective than the other three methods.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the National Science Foundation of China (nos. U1504603, 61301229, and 61401383) and the key scientific research project of Colleges and Universities in Henan province (no. 15A110020).