Mathematical Problems in Engineering
Volume 2017, Article ID 9370984, 10 pages
https://doi.org/10.1155/2017/9370984
Research Article

Multiplicative Noise Removal Based on the Linear Alternating Direction Method for a Hybrid Variational Model

1School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471023, China
2School of Information Engineering, Shandong Youth University of Political Science, Jinan 250103, China
3Institute of Graphics and Image Processing, Xianyang Normal University, Xianyang 712000, China

Correspondence should be addressed to Jianlou Xu; xujianlou@126.com

Received 6 January 2017; Accepted 7 March 2017; Published 15 May 2017

Academic Editor: Pasquale Memmolo

Copyright © 2017 Yan Hao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

To preserve edges, multiplicative noise removal models based on total variation (TV) regularization have been widely studied, but they suffer from the staircase effect. In this paper, to preserve edges while reducing the staircase effect, we develop a hybrid variational model for multiplicative noise removal based on the variable splitting method; the new model has a strictly convex objective function that contains the total variation regularization and a modified regularization term. We use the linear alternating direction method to find the minimizer and also give a convergence proof of the proposed algorithm. Experimental results verify that the proposed model obtains better results for removing multiplicative noise than recent methods.

1. Introduction

Image denoising is one of the fundamental problems in image processing and computer vision. A recorded image may be distorted by unexpected random noise; for example, Synthetic Aperture Radar (SAR) [1, 2] and ultrasound images [3] are often strongly corrupted by multiplicative noise. Therefore, removing the multiplicative noise is very important. In this paper, we focus on the following multiplicative noise removal problem:
\[ f = u\,\eta, \tag{1} \]
where $f$ is the observed image and $u$ is the original image. $\eta$ is the multiplicative noise, which follows a Gamma law with mean one; its probability density function is given by
\[ p(\eta) = \frac{L^{L}}{\Gamma(L)}\,\eta^{L-1} e^{-L\eta}, \quad \eta \ge 0, \tag{2} \]
where $L$ is the number of looks and $\Gamma(\cdot)$ is the Gamma function.
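
As an illustration only (not part of the original paper), the following Python/NumPy sketch degrades a clean image according to (1) and (2); the function name add_gamma_speckle is our own choice.

import numpy as np

def add_gamma_speckle(u, L, seed=None):
    """Multiply each pixel of u by an independent Gamma(L, 1/L) sample,
    which has mean 1 and variance 1/L, so a smaller number of looks L
    means stronger speckle."""
    rng = np.random.default_rng(seed)
    eta = rng.gamma(shape=L, scale=1.0 / L, size=u.shape)
    return u * eta

For example, add_gamma_speckle(u, L=3) produces the heaviest of the three noise levels used in the experiments of Section 4.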

There are many research works in the field of denoising. For example, Rosa-Zurera et al. [1] and Sveinsson and Benediktsson [2] used wavelet methods to reduce speckle noise. Ramamoorthy et al. [3] gave an efficient method for speckle reduction in ultrasound liver images. To remove multiplicative noise, Aubert and Aujol [4] used the maximum a posteriori estimator to establish a nonconvex variational model (the AA model) as follows:
\[ \min_{u} \int_{\Omega} \Big( \log u + \frac{f}{u} \Big)\,dx + \lambda \int_{\Omega} |\nabla u|\,dx, \tag{3} \]
where $\int_{\Omega} |\nabla u|\,dx$ is the total variation (TV) regularization term and $\lambda$ is a regularization parameter.

Because the AA model is nonconvex, it does not have a global minimal solution. Later, Shi and Osher [5] used the logarithm transformation $w = \log u$ and derived a globally convex model (the SO model):
\[ \min_{w} \int_{\Omega} |\nabla w|\,dx + \lambda \int_{\Omega} \big( w + f e^{-w} \big)\,dx, \tag{4} \]
where the second term is the data fidelity term and $\lambda$ is a regularization parameter. Using a relaxed inverse scale space flow method, the authors obtained an excellent denoising effect, although it took a long time. To improve the speed, a new variational model was introduced by Huang et al. [6] through variable splitting. The corresponding minimization problem was
\[ \min_{w,u} \lambda \int_{\Omega} \big( w + f e^{-w} \big)\,dx + \frac{\alpha}{2}\|w - u\|_{2}^{2} + \int_{\Omega} |\nabla u|\,dx. \tag{5} \]
Equation (5) was solved by an alternating minimization method, and the two corresponding subproblems could be solved by Newton's method and the dual method [7], respectively. The alternating iterative algorithm ensures that the solution of the model is unique, and the iterative sequence converges to the optimal solution. Similarly to the SO model, Bioucas-Dias and Figueiredo [8] proposed a new speckle reduction scheme by combining operator splitting with the augmented Lagrangian method. Taking the I-divergence as the data-fitting term and the TV regularizer, Steidl and Teuber [9] introduced a variational restoration model. Inspired by the connection between the augmented Lagrangian algorithm and the primal-dual hybrid gradient algorithm, Chen et al. [10] developed an improved primal-dual algorithm for multiplicative noise removal. Durand et al. [11] proposed a hybrid method composed of data-fitting for the curvelet frame coefficients and the total variation. Combining the weighted TV with the data term in (4), a nonconvex sparse regularization variational model was proposed in [12]. Using the constrained TV norm, Hao et al. [13] put forward a dual method and its acceleration. For other methods for multiplicative noise removal, refer to [14–22].

However, it is well known that the TV model suffers from the so-called staircase effect. To reduce the staircase effect, several methods were introduced [23–25]. Inspired by the method in [6], we separate the second-order TV regularization term into two low-order regularization terms by using variable splitting. To solve the proposed model, we design a linear alternating direction method to find the minimizer of the resulting objective function. Our experimental results show that the proposed method performs well for multiplicative noise removal.

The outline of this paper is as follows. In Section 2, we introduce the hybrid variational model and develop an alternating direction algorithm. In Section 3, we give the convergence analysis of the proposed algorithm. In Section 4, we present numerical experiments to demonstrate the proposed algorithm. Finally, conclusions are given in Section 5.

2. The Proposed Model and Algorithm

2.1. The Proposed Model

In [23], the authors proposed a high order variational model, which successfully alleviates the staircase effect for additive Gaussian noise. Inspired by the splitting idea in [6], we introduce an auxiliary variable in the regularization term of [23] and divide the second-order derivative term into two low-order terms. The aim is not only to lower the order of the derivatives involved but also to alleviate the staircase effect. Meanwhile, the proposed model contains a TV term which can preserve edges. That is to say, we propose a hybrid variational model based on variable splitting for multiplicative noise removal as follows:
\[ \min_{w,\mathbf{p}} \int_{\Omega} |\nabla \mathbf{p}|\,dx + \frac{\beta}{2}\int_{\Omega} |\mathbf{p} - \nabla w|^{2}\,dx + \alpha \int_{\Omega} |\nabla w|\,dx + \lambda \int_{\Omega} \big( w + f e^{-w} \big)\,dx, \tag{6} \]
where $w = \log u$ and $\mathbf{p}$ is a vector field approximating the gradient of the log image. $\lambda$, $\alpha$, and $\beta$ are the regularization parameters.
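
As a reading aid, the following Python sketch evaluates a straightforward finite-difference discretization of an energy of the form (6) as reconstructed above; the componentwise treatment of the vectorial TV term, the boundary handling, and all names are our assumptions, not the authors' implementation.

import numpy as np

def grad(v):
    """Forward differences with zeros at the far boundary."""
    gx = np.zeros_like(v); gy = np.zeros_like(v)
    gx[:-1, :] = v[1:, :] - v[:-1, :]
    gy[:, :-1] = v[:, 1:] - v[:, :-1]
    return gx, gy

def hybrid_energy(w, p1, p2, f, lam, alpha, beta):
    """Discrete energy of (6): TV of the vector field p = (p1, p2),
    the coupling ||p - grad w||^2, TV of w, and the log-domain fidelity."""
    wx, wy = grad(w)
    p1x, p1y = grad(p1)
    p2x, p2y = grad(p2)
    tv_p = np.sum(np.sqrt(p1x ** 2 + p1y ** 2)) + np.sum(np.sqrt(p2x ** 2 + p2y ** 2))
    coupling = 0.5 * beta * np.sum((p1 - wx) ** 2 + (p2 - wy) ** 2)
    tv_w = alpha * np.sum(np.sqrt(wx ** 2 + wy ** 2))
    fidelity = lam * np.sum(w + f * np.exp(-w))
    return tv_p + coupling + tv_w + fidelity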

The proposed model has the following advantages. Firstly, it is strictly convex and has a unique minimal solution, which is different from the models in [4, 12, 15]. Secondly, when $\beta$ tends to infinity, $\mathbf{p}$ tends to $\nabla w$, and the first two terms turn into the second-order TV, which was proposed in [23]. In that work, it is shown that in smooth regions the second-order TV removes additive noise and alleviates the staircase effect better than TV. So the proposed model can remove large noise and preserve the structures of the restored image through the parameters $\alpha$ and $\beta$. Thirdly, [20] also studied a high order variational model for multiplicative noise removal; however, it requires the restored image to belong to a more complex function space (refer to [20]).

2.2. The Proposed Algorithm

To solve the proposed model (6), we use the following alternating direction method, in which (6) is split into two minimization subproblems:
\[ \mathbf{p}^{k+1} = \arg\min_{\mathbf{p}} \int_{\Omega} |\nabla \mathbf{p}|\,dx + \frac{\beta}{2}\int_{\Omega} |\mathbf{p} - \nabla w^{k}|^{2}\,dx, \tag{7} \]
\[ w^{k+1} = \arg\min_{w} \frac{\beta}{2}\int_{\Omega} |\mathbf{p}^{k+1} - \nabla w|^{2}\,dx + \alpha \int_{\Omega} |\nabla w|\,dx + \lambda \int_{\Omega} \big( w + f e^{-w} \big)\,dx. \tag{8} \]

From (7) and (8), it is interesting to see that this iteration is similar in spirit to the two-step method in [24]: the first step obtains a smooth vector field, and the second step recovers the image from the smoothed vector field. Because two low-order variational models are used to restore the image, the method of [24] preserves edges and details better than [23] and has been shown to reduce the staircase effect well. The difference between [24] and the proposed method is that here the vector field and the reconstructed image are updated in an intertwined fashion, while in [24] the two steps are decoupled, so the proposed iterative method can remove noise and preserve edges better.

For (7), the vector field can be obtained by Chambolle's dual method [7], which gives
\[ \mathbf{p}^{k+1} = \nabla w^{k} - \frac{1}{\beta}\,\operatorname{div} Q, \tag{9} \]
where $Q$ is a matrix-valued dual variable, $\operatorname{div} Q$ is its row-wise divergence, and the gradient and divergence operators are discretized by first-order forward and backward differences, respectively. $Q$ can be computed by the fixed point iteration: let $Q^{k,0} = 0$, and iterate
\[ Q^{k,n+1} = \frac{Q^{k,n} + \tau\, \nabla\!\big( \operatorname{div} Q^{k,n} - \beta \nabla w^{k} \big)}{1 + \tau\, \big| \nabla\!\big( \operatorname{div} Q^{k,n} - \beta \nabla w^{k} \big) \big|}, \tag{10} \]
where $\tau$ is the time step and the two superscripts $k$ and $n$ denote the outer loop and the inner loop.
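
A minimal Python/NumPy sketch of this dual solver, assuming the reconstruction in (9)-(10) and treating the two components of the vector field independently (which decouples the vectorial TV); the helper names grad, div, and chambolle_denoise are ours, not the authors'.

import numpy as np

def grad(v):
    """Forward differences (zero at the far boundary)."""
    gx = np.zeros_like(v); gy = np.zeros_like(v)
    gx[:-1, :] = v[1:, :] - v[:-1, :]
    gy[:, :-1] = v[:, 1:] - v[:, :-1]
    return gx, gy

def div(qx, qy):
    """Backward-difference divergence, the negative adjoint of grad."""
    dx = np.zeros_like(qx); dy = np.zeros_like(qy)
    dx[0, :] = qx[0, :]; dx[1:-1, :] = qx[1:-1, :] - qx[:-2, :]; dx[-1, :] = -qx[-2, :]
    dy[:, 0] = qy[:, 0]; dy[:, 1:-1] = qy[:, 1:-1] - qy[:, :-2]; dy[:, -1] = -qy[:, -2]
    return dx + dy

def chambolle_denoise(g, weight, tau=0.125, n_iter=50):
    """Solve min_v TV(v) + (1/(2*weight)) * ||v - g||^2 by Chambolle's fixed point [7];
    tau <= 1/8 is Chambolle's step-size condition."""
    qx = np.zeros_like(g); qy = np.zeros_like(g)
    for _ in range(n_iter):
        rx, ry = grad(div(qx, qy) - g / weight)   # gradient of the dual residual
        norm = np.sqrt(rx ** 2 + ry ** 2)
        qx = (qx + tau * rx) / (1.0 + tau * norm)
        qy = (qy + tau * ry) / (1.0 + tau * norm)
    return g - weight * div(qx, qy)

Applied to (7) componentwise: with (wx, wy) = grad(w_k), take p1 = chambolle_denoise(wx, 1.0 / beta) and p2 = chambolle_denoise(wy, 1.0 / beta).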

For (8), we use the split Bregman method [26–30], which is often used to solve optimization problems with an L1 regularization term. To apply the split Bregman method to (8), we introduce an auxiliary variable $\mathbf{d}$, let $\mathbf{d} = \nabla w$, and add a quadratic penalty function, which leads to the following unconstrained problem:
\[ \min_{w,\mathbf{d}} \frac{\beta}{2}\|\mathbf{p}^{k+1} - \nabla w\|_{2}^{2} + \lambda \int_{\Omega} \big( w + f e^{-w} \big)\,dx + \alpha \|\mathbf{d}\|_{1} + \frac{\mu}{2}\|\mathbf{d} - \nabla w\|_{2}^{2}, \tag{11} \]
where $\mu$ is a penalty parameter. The split Bregman algorithm is as follows:
\[
\begin{aligned}
w^{k+1} &= \arg\min_{w} \frac{\beta}{2}\|\mathbf{p}^{k+1} - \nabla w\|_{2}^{2} + \lambda \int_{\Omega}\big( w + f e^{-w}\big)\,dx + \frac{\mu}{2}\|\mathbf{d}^{k} - \nabla w - \mathbf{b}^{k}\|_{2}^{2},\\
\mathbf{d}^{k+1} &= \operatorname{shrink}\big( \nabla w^{k+1} + \mathbf{b}^{k},\ \alpha/\mu \big),\\
\mathbf{b}^{k+1} &= \mathbf{b}^{k} + \nabla w^{k+1} - \mathbf{d}^{k+1},
\end{aligned} \tag{12}
\]
where $\mathbf{b}^{k}$ is the Bregman auxiliary variable and $\operatorname{shrink}$ denotes the soft thresholding operator defined by
\[ \operatorname{shrink}(x, \gamma) = \frac{x}{|x|}\max\big( |x| - \gamma,\, 0 \big). \tag{13} \]
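
A minimal sketch of the isotropic shrinkage in (13) applied to a two-component field; the name shrink and the small constant guarding against division by zero are our choices.

import numpy as np

def shrink(x1, x2, gamma):
    """Isotropic soft thresholding of the vector field (x1, x2) with threshold gamma."""
    norm = np.sqrt(x1 ** 2 + x2 ** 2)
    scale = np.maximum(norm - gamma, 0.0) / np.maximum(norm, 1e-12)
    return scale * x1, scale * x2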

The first equation in (12) has no closed-form solution because of the term $f e^{-w}$. To solve it easily, inspired by the linearization idea in [31], we expand its objective into a second-order Taylor formula at $w^{k}$ and replace the Hessian matrix with $\delta I$, where $\delta$ is a constant; then the first equation in (12) can be simplified as follows:
\[ w^{k+1} = \arg\min_{w}\ \big\langle \nabla E(w^{k}),\, w - w^{k} \big\rangle + \frac{\delta}{2}\| w - w^{k} \|_{2}^{2}, \tag{14} \]
where $E(w)$ denotes the objective of the first equation in (12). We can easily obtain the following closed-form solution:
\[ w^{k+1} = w^{k} - \frac{1}{\delta}\Big[ \lambda\big( 1 - f e^{-w^{k}} \big) - \beta\, \operatorname{div}\big( \nabla w^{k} - \mathbf{p}^{k+1} \big) - \mu\, \operatorname{div}\big( \nabla w^{k} - \mathbf{d}^{k} + \mathbf{b}^{k} \big) \Big]. \tag{15} \]
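
Assuming the reconstructed update (15) above (a plain gradient step with step size 1/δ), the w-update can be sketched as follows; grad and div are the same difference operators as in the previous sketch and are repeated only to keep the snippet self-contained, and all names are ours.

import numpy as np

def grad(v):
    gx = np.zeros_like(v); gy = np.zeros_like(v)
    gx[:-1, :] = v[1:, :] - v[:-1, :]
    gy[:, :-1] = v[:, 1:] - v[:, :-1]
    return gx, gy

def div(qx, qy):
    dx = np.zeros_like(qx); dy = np.zeros_like(qy)
    dx[0, :] = qx[0, :]; dx[1:-1, :] = qx[1:-1, :] - qx[:-2, :]; dx[-1, :] = -qx[-2, :]
    dy[:, 0] = qy[:, 0]; dy[:, 1:-1] = qy[:, 1:-1] - qy[:, :-2]; dy[:, -1] = -qy[:, -2]
    return dx + dy

def linearized_w_step(w, f, p1, p2, d1, d2, b1, b2, lam, beta, mu, delta):
    """One linearized w-update following (15): a gradient step of length 1/delta
    on the objective of the first equation in (12), evaluated at the current w."""
    wx, wy = grad(w)
    g = (lam * (1.0 - f * np.exp(-w))
         - beta * div(wx - p1, wy - p2)
         - mu * div(wx - d1 + b1, wy - d2 + b2))
    return w - g / delta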

In the end, the complete algorithm for solving the proposed model (6) can be summarized in the following steps.

Algorithm 1. The linear alternating direction method for solving the proposed model (6)
Initialization. Set $w^{0} = \log f$, $\mathbf{d}^{0} = \mathbf{b}^{0} = 0$, and $k = 0$.
Step 1. Compute $\mathbf{p}^{k+1}$ by (9) and (10).
Step 2. Compute $w^{k+1}$ by (15).
Step 3. Compute $\mathbf{d}^{k+1}$ by the second formula of (12).
Step 4. Compute $\mathbf{b}^{k+1}$ by the third formula of (12).
Step 5. If the stopping condition is not satisfied, set $k := k + 1$ and return to Step 1.
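
As a usage example only, the sketches above can be assembled into the outer loop of Algorithm 1 roughly as follows. This reuses add_gamma_speckle, grad, div, chambolle_denoise, shrink, and linearized_w_step from the earlier sketches, assumes the reconstructed updates (9), (12), and (15), and uses placeholder parameter names rather than the values chosen in the paper; the stopping rule is simplified to a fixed number of outer iterations.

def denoise_multiplicative(f, lam, alpha, beta, mu, delta, n_outer=300):
    """Sketch of Algorithm 1; f is the (strictly positive) observed image."""
    w = np.log(f)                                   # work in the log domain
    d1, d2 = np.zeros_like(f), np.zeros_like(f)
    b1, b2 = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_outer):
        wx, wy = grad(w)
        # Step 1: vector-field subproblem (7), solved componentwise by Chambolle's method.
        p1 = chambolle_denoise(wx, 1.0 / beta)
        p2 = chambolle_denoise(wy, 1.0 / beta)
        # Step 2: linearized w-update (15).
        w = linearized_w_step(w, f, p1, p2, d1, d2, b1, b2, lam, beta, mu, delta)
        # Steps 3 and 4: split Bregman updates for d and b in (12).
        wx, wy = grad(w)
        d1, d2 = shrink(wx + b1, wy + b2, alpha / mu)
        b1, b2 = b1 + wx - d1, b2 + wy - d2
    return np.exp(w)                                # map back from the log domain

For instance, denoise_multiplicative(add_gamma_speckle(u, L=3), lam, alpha, beta, mu, delta) reproduces the overall pipeline on a synthetically degraded image.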

3. Convergence Analysis of the Proposed Algorithm

From the iterative scheme of the proposed algorithm, we can obtain the following results.

Theorem 2. Equation (6) is a strictly convex model, and it has a unique solution.

Proof. For Equation (6), we can easily see that the first and third terms are convex. So we only need to prove that the second term and the last term are strictly convex. We let . We firstly assume that , and we easily get , so is strictly convex for , and, for any , , and , we have . Since, for any , , and , we get . From (17) and (18), we have . That is, the proposed model (6) is strictly convex and so it has a unique solution.
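
For the reader's convenience, the key step of this argument (our paraphrase, under the reconstruction of (6) given above) is that the fidelity integrand is strictly convex in $w$ whenever $f > 0$:
\[
h(w) = w + f e^{-w}, \qquad h'(w) = 1 - f e^{-w}, \qquad h''(w) = f e^{-w} > 0 .
\]
Hence $\lambda \int_{\Omega} (w + f e^{-w})\,dx$ is strictly convex in $w$; combined with the convex terms $\int_{\Omega}|\nabla \mathbf{p}|\,dx$ and $\alpha\int_{\Omega}|\nabla w|\,dx$ and the quadratic coupling term, which is strictly convex in $\mathbf{p}$ for fixed $w$, the whole objective (6) is strictly convex in $(w, \mathbf{p})$.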

Theorem 3. Let $\{(\mathbf{p}^{k}, w^{k})\}$ be the sequence generated by (7) and (8). Then $\|\mathbf{p}^{k+1} - \mathbf{p}^{k}\|_{2} \to 0$ and $\|w^{k+1} - w^{k}\|_{2} \to 0$ as $k \to \infty$.

Proof. Set , and then . So . Because is convex, we have . By the second-order Taylor expansion, we obtain . So we have . In addition, , is the minimal solution of , and , so we get . Since is strictly convex and coercive, we can use a matrix to replace the Hessian matrix of and obtain . Summing the above inequality from to , we have . Similarly, we have .

4. Experimental Results

In this section, numerical results are presented to demonstrate the performance of the proposed algorithm. All simulations are performed in MATLAB 7.8 on a PC with a 3.10 GHz Intel Core CPU. The restoration results are compared with those obtained by the AA model [4], the model of Huang et al. [6], and the model of Han et al. [12], applied to the same noisy images. To simplify parameter tuning, we normalize the size of all images as well as the range of gray intensities of each image so that the logarithm can be computed on the image. In the tests, each pixel of an original image is degraded by Gamma noise with mean one, and the noise level is controlled by the value of $L$ in (2); we test all of the following images with the three noise levels $L = 3, 5, 9$. The Peak Signal-to-Noise Ratio (PSNR) is used to measure the quality of the restored image, defined as
\[ \mathrm{PSNR} = 10 \log_{10} \frac{255^{2}\, MN}{\|\hat{u} - u^{*}\|_{2}^{2}}, \]
where $\hat{u}$ and $u^{*}$ denote the restored image and the noise-free image, respectively, and $M \times N$ is the size of the image.
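
A minimal PSNR helper matching the definition above (assuming an intensity peak of 255; the function name is ours):

import numpy as np

def psnr(u_hat, u_star, peak=255.0):
    """Peak Signal-to-Noise Ratio between the restored image u_hat and the noise-free image u_star."""
    mse = np.mean((u_hat - u_star) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)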

For a fair comparison, some parameters are those recommended by the respective authors, and the others are adjusted so that every compared method obtains its best results. The stopping condition is that each algorithm obtains its best PSNR value or reaches the maximum number of iterations, 300. The regularization parameter and the discrete time step in the AA model are chosen as recommended, and the parameters in Huang et al. [6] and Han et al. [12] are the recommended ones, respectively. In the proposed algorithm, we first fix some of the parameters and then search for the others; we find a set of parameter values for which the proposed model obtains good results for the following test images. From (6), we can see that as $\beta$ becomes larger and larger, the vector field $\mathbf{p}$ tends to the gradient of the log image and the first two terms turn into the second-order term, which can remove large noise well; similarly, when $\alpha$ is very big, the gradient of the log image is forced toward zero, which means the restored image will be very smooth. So we take fixed values for these two parameters and tune the other two parameters according to different images and different noise levels. In general, when the noise level is large, we take larger values.

The PSNR values and the running times of the different algorithms are listed in Table 1, with the best PSNR values shown in bold. By inspection of Table 1, we find that the proposed algorithm achieves the highest PSNR for most of the denoising results. Even in the unsuccessful cases, our algorithm yields a PSNR comparable to the best values, which are obtained by Han et al. [12]. Regarding the running time, we observe that different algorithms need different amounts of time to restore different images; that is to say, it is hard to judge which algorithm is fast or slow. For example, although the AA model uses the gradient descent method, its computation is very fast due to the larger time step. The method of [6] utilizes a fast algorithm to solve its model but takes longer than the other algorithms, and the reason for this may be the recommended parameters, which are chosen to obtain a better PSNR.

Table 1: The denoising results for different algorithms corresponding to three noise levels.

In order to evaluate the overall performance of the different algorithms, we compute the respective average values over the three noise levels in Table 2. From Table 2, we observe that the average PSNR values of the proposed algorithm exceed those of Han et al. [12] by about 0.3 dB, those of Huang's model [6] by 0.4 dB, and those of the AA model [4] by 1.3 dB. The average time spent by the proposed algorithm is less than that of the other three methods when the noise level is 3 or 9; when the noise level is 5, the running time of the proposed algorithm is almost the same as that of AA. Consequently, we believe that the proposed model performs better on average than the other three models.

Table 2: The respective average value for the above different noise levels.

To make a visual comparison of the restored images, we also give the restored results of three test images with three different noise levels in Figure 1. Figure 1(a) shows the clean test images: Boat, House, and Nimes. Figure 1(b) shows the corresponding noisy images; from left to right, the noise level is $L$ = 3, 5, and 9, respectively. The respective denoising results are in Figure 2: Figure 2(a) shows the results of the AA method [4], Figure 2(b) those of Huang's method [6], Figure 2(c) those of Han's method [12], and Figure 2(d) those of our method. From Figure 2, we can see that the proposed model obtains better visual quality than the other three methods.

Figure 1: Three clean and three noisy images. (a) Clean images; (b) noisy images. From left to right, the noise level is $L$ = 3, 5, and 9, respectively.
Figure 2: Denoising results of different methods. From (a) to (d), the results are obtained by the AA model [4], Huang et al. [6], Han et al. [12], and the proposed model, respectively.

To illustrate the advantage of the TV term in (6), we take the Lena and House images with one noise level as examples. When (6) does not contain the TV term, we set $\alpha = 0$; the other parameters are then adjusted so that we obtain the best results. The experimental results are shown in Figure 3. Figures 3(a) and 3(b) are the results for $\alpha = 0$, and Figures 3(c) and 3(d) are the results for $\alpha > 0$. Figures 3(e) and 3(f) are the PSNR curves for the Lena and House images. From Figures 3(e) and 3(f), we can see that the proposed algorithm obtains better results when the TV term is kept ($\alpha > 0$).

Figure 3: The experimental results with and without the TV term. (a) and (b) are the results for $\alpha = 0$; (c) and (d) are the results for $\alpha > 0$; (e) shows the evolution of PSNR with the number of iterations for the Lena image; (f) shows the evolution of PSNR with the number of iterations for the House image.

Finally, we take the House image as an example to compare the new method with the recent model [15] at one noise level. Reference [15] adopts the relaxed alternating direction method and a primal-dual method. The PSNR and error curves are shown in Figure 4. From Figure 4, we can see that the two methods obtain almost the same PSNR as the number of iterations increases; however, the new method reaches the best PSNR faster than [15]. In addition, the new model is strictly convex and a convergence proof is given, whereas [15] does not give a convergence proof.

Figure 4: The curves of PSNR and error. (a) The evolution of PSNR with the number of iterations; (b) the evolution of the error with the number of iterations.

5. Conclusions

In this paper, we have studied a hybrid variational model for the multiplicative noise removal problem. The proposed model is strictly convex and has a unique solution. To solve the model, a linear alternating minimization algorithm is employed, which splits the original problem into two subproblems. For the two subproblems, we adopt the dual method and the split Bregman method, respectively. Using a second-order Taylor expansion, we obtain a closed-form update. In addition, we give a convergence analysis of the proposed algorithm. Experimental results show that the proposed method is more effective than the other three methods.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. U1504603, 61301229, and 61401383) and the Key Scientific Research Project of Colleges and Universities in Henan Province (no. 15A110020).

References

1. M. Rosa-Zurera, A. M. Cobreces-Alvarez, J. C. Nieto-Borge, M. P. Jarabo-Amores, and D. Mata-Moya, "Wavelet denoising with edge detection for speckle reduction in SAR images," in Proceedings of the 15th European Signal Processing Conference, pp. 1098–1102, 2007.
2. J. Sveinsson and J. Benediktsson, "Speckle reduction and enhancement of SAR images in the wavelet domain," in Proceedings of the International Geoscience and Remote Sensing Symposium, vol. 1, pp. 63–66, 1996.
3. S. Ramamoorthy, R. Subramanian, and D. Gandhi, "An efficient method for speckle reduction in Ultrasound liver images for e-Health applications," in Lecture Notes in Computer Science, vol. 8337, pp. 311–321, 2014.
4. G. Aubert and J.-F. Aujol, "A variational approach to removing multiplicative noise," SIAM Journal on Applied Mathematics, vol. 68, no. 4, pp. 925–946, 2008.
5. J. Shi and S. Osher, "A nonlinear inverse scale space method for a convex multiplicative noise model," SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 294–321, 2008.
6. Y. Huang, M. Ng, and Y. Wen, "A new total variation method for multiplicative noise removal," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 20–40, 2009.
7. A. Chambolle, "An algorithm for total variation minimization and applications," Journal of Mathematical Imaging and Vision, vol. 20, no. 1-2, pp. 89–97, 2004.
8. J. M. Bioucas-Dias and M. A. Figueiredo, "Multiplicative noise removal using variable splitting and constrained optimization," IEEE Transactions on Image Processing, vol. 19, no. 7, pp. 1720–1730, 2010.
9. G. Steidl and T. Teuber, "Removing multiplicative noise by Douglas-Rachford splitting methods," Journal of Mathematical Imaging and Vision, vol. 36, no. 2, pp. 168–184, 2010.
10. D.-Q. Chen, X.-P. Du, and Y. Zhou, "Primal-dual algorithm based on Gauss-Seidel scheme with application to multiplicative noise removal," Journal of Computational and Applied Mathematics, vol. 292, pp. 609–622, 2016.
11. S. Durand, J. Fadili, and M. Nikolova, "Multiplicative noise removal using l1 fidelity on frame coefficients," Journal of Mathematical Imaging and Vision, vol. 36, no. 3, pp. 201–226, 2010.
12. Y. Han, X.-C. Feng, G. Baciu, and W.-W. Wang, "Nonconvex sparse regularizer based speckle noise removal," Pattern Recognition, vol. 46, no. 3, pp. 989–1001, 2013.
13. Y. Hao and J. Xu, "An effective dual method for multiplicative noise removal," Journal of Visual Communication and Image Representation, vol. 25, no. 2, pp. 306–312, 2014.
14. W. Feng, H. Lei, and Y. Gao, "Speckle reduction via higher order total variation approach," IEEE Transactions on Image Processing, vol. 23, no. 4, pp. 1831–1843, 2014.
15. Y. Hao, J. Xu, S. Li, and X. Zhang, "A variational model based on split Bregman method for multiplicative noise removal," AEU—International Journal of Electronics and Communications, vol. 69, no. 9, pp. 1291–1296, 2015.
16. H. Woo and S. Yun, "Alternating minimization algorithm for speckle reduction with a shifting technique," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1701–1714, 2012.
17. D.-Q. Chen and L.-Z. Cheng, "Fast linearized alternating direction minimization algorithm with adaptive parameter selection for multiplicative noise removal," Journal of Computational and Applied Mathematics, vol. 257, pp. 29–45, 2014.
18. D.-Q. Chen and Y. Zhou, "Multiplicative denoising based on linearized alternating direction method using discrepancy function constraint," Journal of Scientific Computing, vol. 60, no. 3, pp. 483–504, 2014.
19. H. Ren, L. Qin, and X. Zhu, "Speckle reduction and cartoon-texture decomposition of ophthalmic optical coherence tomography images by variational image decomposition," Optik, vol. 127, no. 19, pp. 7809–7821, 2016.
20. Y. Wu and X. Feng, "Speckle noise reduction via nonconvex high total variation approach," Mathematical Problems in Engineering, vol. 2015, 11 pages, 2015.
21. V. Bianco, P. Memmolo, M. Paturzo, A. Finizio, B. Javidi, and P. Ferraro, "Quasi noise-free digital holography," Light: Science & Applications, vol. 5, Article ID e16142, 2016.
22. V. Bianco, P. Memmolo, M. Paturzo, and P. Ferraro, "On-speckle suppression in IR digital holography," Optics Letters, vol. 41, pp. 5226–5229, 2016.
23. M. Lysaker, A. Lundervold, and X.-C. Tai, "Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time," IEEE Transactions on Image Processing, vol. 12, no. 12, pp. 1579–1589, 2003.
24. M. Lysaker, S. Osher, and X.-C. Tai, "Noise removal using smoothed normals and surface fitting," IEEE Transactions on Image Processing, vol. 13, no. 10, pp. 1345–1357, 2004.
25. J. Xu, Y. Hao, and H. Song, "A modified LOT model for image denoising," Multimedia Tools and Applications, vol. 76, no. 6, pp. 8131–8144, 2017.
26. T. Goldstein and S. Osher, "The split Bregman method for L1 regularized problems," SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 323–343, 2009.
27. C. Wu and X.-C. Tai, "Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models," SIAM Journal on Imaging Sciences, vol. 3, no. 3, pp. 300–339, 2010.
28. C. Chen and G. Xu, "A new linearized split Bregman iterative algorithm for image reconstruction in sparse-view X-ray computed tomography," Computers and Mathematics with Applications, vol. 71, pp. 1537–1559, 2016.
29. J. Xu, X. Feng, and Y. Hao, "A coupled variational model for image denoising using a duality strategy and split Bregman," Multidimensional Systems and Signal Processing, vol. 25, no. 1, pp. 83–94, 2014.
30. S. Setzer, G. Steidl, and T. Teuber, "Deblurring Poissonian images by split Bregman techniques," Journal of Visual Communication and Image Representation, vol. 21, no. 3, pp. 193–199, 2010.
31. H. Woo and S. Yun, "Proximal linearized alternating direction method for multiplicative denoising," SIAM Journal on Scientific Computing, vol. 35, no. 2, pp. B336–B358, 2013.