
Mathematical Problems in Engineering

Volume 2013 (2013), Article ID 834035, 13 pages

http://dx.doi.org/10.1155/2013/834035

## A Fast High-Order Total Variation Minimization Method for Multiplicative Noise Removal

^{1}School of Science, Huaihai Institute of Technology, Lianyungang, Jiangsu 222005, China
^{2}School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 610054, China

Received 17 October 2012; Revised 10 March 2013; Accepted 13 March 2013

Academic Editor: Fatih Yaman

Copyright © 2013 Xiao-Guang Lv et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The multiplicative noise removal problem has received considerable attention in recent years. Total variation regularization for this problem preserves edges well but has the sometimes undesirable staircase effect. In this paper, we propose a fast high-order total variation minimization method to restore images corrupted by multiplicative noise. The proposed method preserves edges and at the same time avoids the staircase effect in smooth regions. An alternating minimization algorithm is employed to solve the proposed high-order total variation minimization problem, and we discuss its convergence. Numerical results show that the proposed method gives restored images of higher quality than some existing multiplicative noise removal methods.

#### 1. Introduction

The image denoising problem has been widely studied in image processing. It includes both additive and multiplicative noise removal, and most of the literature deals with the additive noise model [1–4]. Given a noisy image f = u + n, where u is the original image and n is the noise, the denoising problem is to recover u from the observed image f. The additive noise model has been extensively studied; we refer the reader to [5–11] and the references therein for a review of various methods.

In this paper, we consider the problem of recovering the original image from a multiplicative noisy image. In the multiplicative noise model, the recorded image f is the product of the original image u and the noise η:

f = uη.     (1)

This problem arises in ultrasound imaging, synthetic aperture radar (SAR) and sonar (SAS), laser imaging, and magnetic field inhomogeneity in MRI [12, 13]. In this work, we concentrate on the case where η follows a Gamma distribution, which commonly occurs in SAR. For a given resolution cell of the imaging device, the complex SAR image is computed after the SAR system receives the coherent sum of reflected monochromatic microwaves. The complex SAR image at one point is usually modeled by a complex zero-mean circular Gaussian density (i.e., the real and imaginary parts are independent Gaussian variables with a common variance). The intensity image is defined as the squared magnitude of the corresponding complex SAR image; its values are independently exponentially distributed [14, 15]. Since the observation is acquired in a highly coherent system, the intensity image exhibits a peculiar granular appearance known as speckle. A conventional SAR system reduces the speckle in the intensity image by multilooking, that is, averaging L observed intensity images of the same resolution cell taken from slightly different angles (the L-look process) [16].
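As an illustration of this noise model, the following sketch (Python with NumPy; the function and parameter names are ours, not from the paper) corrupts an image with L-look Gamma speckle whose mean is 1 and whose variance is 1/L:

```python
import numpy as np

def add_gamma_noise(u, looks, rng=None):
    """Corrupt image u with multiplicative Gamma noise f = u * eta.

    eta ~ Gamma(shape=looks, scale=1/looks) is the distribution of the
    average of `looks` independent unit-mean exponential speckle samples,
    so E[eta] = 1 and Var[eta] = 1/looks: more looks, weaker noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    eta = rng.gamma(shape=looks, scale=1.0 / looks, size=u.shape)
    return u * eta
```

Raising the number of looks increases the Gamma shape parameter and shrinks the speckle variance, matching the multilooking process described above.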

Due to the coherent nature of these image acquisition processes, nearly all the information of the original image may be lost when it is distorted by multiplicative noise. In general, the standard additive denoising methods, so prevalent in image processing, are inadequate. Therefore, it is necessary to devise efficient and reliable algorithms for recovering the true image from the observed multiplicative noisy image. Over the past decade, a number of variational approaches to multiplicative noise removal have been proposed. An early variational approach is the one by Rudin et al. [17]. Based on the statistical properties of the multiplicative noise η, the recovery of the image was formulated as the following constrained optimization problem [17]:

min_u |u|_BV   subject to   (1/|Ω|) ∫_Ω (f/u) dx = 1,   (1/|Ω|) ∫_Ω (f/u − 1)² dx = σ²,     (2)

where |u|_BV denotes the seminorm on the space of bounded variation (BV), which coincides with ∫_Ω |∇u| dx when u is smooth. The two constraints state that the mean of the noise equals 1 and that its variance equals σ². In (2), only basic statistical properties of the noise, its mean and its variance, are taken into account, which somewhat limits the restored results.

By using a maximum a posteriori (MAP) estimator, Aubert and Aujol [18] proposed a functional whose minimizer corresponds to the denoised image to be recovered. This functional is

E(u) = ∫_Ω ( log u + f/u ) dx + λ ∫_Ω |∇u| dx,     (3)

where the total variation of u serves as the regularization term and λ is the regularization parameter, which controls the trade-off between a good fit to f and the smoothness requirement imposed by the total variation regularization. Although the functional they proposed is not convex, they proved the existence of a minimizer and demonstrated the capability of their model on numerical examples.

Because the objective function (3) is not convex for all u > 0, the computed solution may not be the global minimizer of (3). To overcome this problem, Huang et al. [19] proposed and studied a strictly convex objective function for the multiplicative noise removal problem. In [19], the authors introduced an auxiliary variable z = log u in (3). With the new variable, the proposed unconstrained total variation denoising problem is

min_{z,u} ⟨ z + f e^{−z}, 1 ⟩ + α₁ ‖z − u‖₂² + α₂ ∫_Ω |∇u| dx,

where α₁ and α₂ are positive regularization parameters. The parameter α₁ measures the trade-off between an image obtained by maximum likelihood estimation from the first term and a total variation denoised image u. The parameter α₂ measures the amount of regularization applied to the denoised image u. The main advantage of this formulation is that the total variation norm can be used in the noise removal process in an efficient manner; the method proposed in [19] therefore preserves edges very well in the denoised image.

Recently, Bioucas-Dias and Figueiredo [20] proposed an efficient multiplicative noise removal method using variable splitting and constrained optimization. After converting the multiplicative noise model into an additive one by taking logarithms, they used variable splitting to obtain an equivalent constrained problem, which they solved with the augmented Lagrangian method. A set of experiments showed that the method, which they named MIDAL (multiplicative image denoising by augmented Lagrangian), yields state-of-the-art results in terms of both speed and denoising performance. In [21], Steidl and Teuber considered a variational restoration model consisting of the I-divergence as the data-fitting term and the total variation seminorm or nonlocal means as the regularizer for removing multiplicative noise. They proposed to compute the minimizers of their restoration functionals by Douglas-Rachford splitting techniques. For a particular splitting, they presented a semi-implicit scheme to solve the involved nonlinear systems of equations and proved its linear convergence.

It is known that total variation denoising is a PDE-based technique that preserves edges well but has the sometimes undesirable staircase effect, namely, the transformation of smooth regions (ramps) into piecewise constant regions (stairs) [22–26]. Approaches involving the traditional total variation therefore often cause the staircase effect, since they favor piecewise constant solutions. Staircase solutions fail the evaluation of visual quality, and they can develop false edges that do not exist in the true image. To attenuate the staircase effect, there is growing interest in the literature in replacing the total variation norm by a high-order total variation norm. The motivation behind this attempt is to restore a potentially wider class of images, comprising more than merely piecewise constant regions. The majority of the high-order norms involve second-order differential operators, because piecewise-vanishing second-order derivatives lead to piecewise-linear solutions that better fit smooth intensity changes; see [27] for more details. There are two classes of second-order regularization methods for image restoration problems. The first class employs a second-order regularizer in a standalone way. For example, in [28], the authors considered a fourth-order partial differential equation (PDE) model for noise removal and employed the dual algorithm of Chambolle to solve the high-order problems. In [29], Hu and Jacob applied higher degree total variation (HDTV) regularization to image recovery. The second class combines the TV norm with a second-order regularizer. For example, a technique combining the TV filter with a fourth-order PDE filter was proposed in [30, 31] to preserve edges and at the same time avoid the staircase effect in smooth regions. In [32], Papafitsoros and Schönlieb considered a high-order model involving convex functions of the TV and the TV of the first derivatives for image restoration problems and used the split Bregman method to solve the corresponding discretized problem numerically.

In this paper, we present a fast high-order total variation minimization method for the multiplicative noise removal problem. The technique substantially reduces the staircase effect while preserving sharp jump discontinuities (edges). An alternating minimization algorithm is employed to solve the proposed high-order total variation minimization model, and we discuss its convergence. Numerical results show that the proposed method gives restored images of higher quality than some existing multiplicative noise removal methods.

The organization of this paper is as follows. In the next section, we present the high-order total variation minimization method for the multiplicative noise removal problem. In Section 3, an alternating minimization algorithm is employed to find the minimizer of the proposed minimization problem. We analyze the convergence of the proposed alternating minimization algorithm in Section 4. Numerical experiments illustrating the performance of the proposed algorithm are given in Section 5. Concluding remarks are given in the last section.

#### 2. A High-Order Total Variation Multiplicative Denoising Model

In this section, our aim is to propose a strictly convex objective function for restoring images distorted by multiplicative noise. Most importantly, we apply a high-order total variation to the objective function to achieve sharp edges and efficient staircase reduction. We introduce our multiplicative denoising model from the statistical perspective using the Bayes formula.

Let f, u, and η denote samples of the random variables F, U, and N, respectively. We assume that the random variable N is independent and identically distributed (i.i.d.) across pixels and that it follows a Gamma distribution at each pixel; that is, its probability density function is

g(η) = (1 / (θ^L Γ(L))) η^{L−1} e^{−η/θ},  η ≥ 0,

where Γ is the usual Gamma function and L and θ denote the shape and scale parameters of the Gamma distribution, respectively. We note that the mean of a Gamma-distributed variable is Lθ and its variance is Lθ². Since the multiplicative noise is assumed to have mean 1, we have θ = 1/L.

According to maximum a posteriori estimation, the restored image can be determined by maximizing P(U|F). From the Bayes rule, we have P(U|F) = P(F|U)P(U)/P(F), so maximizing P(U|F) is equivalent to maximizing P(F|U)P(U). Using the proposition in [18], the likelihood P(F|U) is obtained from the Gamma density of the noise. Taking the logarithm transformation z = log u into account, we assume that the image prior is given by a Gibbs form

P(U) ∝ exp( −γ₁ ‖z − u‖₂² − γ₂ HTV(u) ),

where γ₁ and γ₂ are two positive scalars and HTV(u) is a high-order total variation of u. Here we assume that the difference between z and u follows a Gaussian distribution and that u follows a high-order total variation prior. Thus, maximizing P(U|F) amounts to minimizing the negative log-likelihood. Since P(F) is constant, the maximization can be rewritten as an optimization problem. The previous computation leads us to propose the following high-order total variation model for restoring images corrupted by Gamma noise, obtained by introducing the new variable z:

min_{z,u} ⟨ z + f e^{−z}, 1 ⟩ + α₁ ‖z − u‖₂² + α₂ ∑ |∇²u|,     (14)

where α₁ and α₂ are positive regularization parameters. In the above model, ∇²u denotes the Hessian of u, and |∇²u| is simply the Frobenius norm of the Hessian. The parameter α₁ measures the trade-off between an image obtained by maximum likelihood estimation from the first term and a high-order total variation denoised image u. The parameter α₂ measures the amount of regularization applied to the denoised image u. The main advantage of the proposed method is that the high-order total variation norm can be used in the noise removal process in an efficient manner. Therefore, the proposed method preserves edges very well while reducing the staircase effect in the denoised image.
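To make the regularizer concrete, the following sketch (our own finite-difference discretization, not necessarily the paper's exact stencil) evaluates the high-order total variation as the pixelwise Frobenius norm of an approximate Hessian:

```python
import numpy as np

def high_order_tv(u):
    """Sum over pixels of the Frobenius norm of a finite-difference
    Hessian of u: sqrt(u_xx^2 + u_xy^2 + u_yx^2 + u_yy^2), summed.
    Piecewise-linear images (ramps) have value ~0, so the penalty
    does not force the staircasing that first-order TV induces.
    """
    ux, uy = np.gradient(u)
    uxx, uxy = np.gradient(ux)
    uyx, uyy = np.gradient(uy)
    return np.sqrt(uxx**2 + uxy**2 + uyx**2 + uyy**2).sum()
```

A linear ramp is invisible to this penalty, while first-order TV charges it in proportion to its slope; this is exactly why the high-order term avoids staircasing on smooth gradients.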

Moreover, it is obvious that when u includes an edge, z = log u contains an edge at the same location; that is, the logarithm transformation preserves image edges; see [19] for more details. Based on this observation, we can view z as an image in the logarithm domain and apply high-order total variation regularization to it to recover its edges and reduce the staircase effect. It is interesting to note that z in the proposed model can be positive, zero, or negative, while the corresponding u = e^z remains positive, whereas in the objective function (3) the image variable itself must be positive. In particular, when z is a large negative value, the corresponding u = e^z is simply close to zero.

#### 3. The Alternating Minimization Algorithm

In this section, an alternating minimization algorithm is employed to solve the proposed high-order total variation minimization problem efficiently. To solve (14), we need to consider the following two minimization subproblems. (i) For u fixed, determine the solution z of

min_z ⟨ z + f e^{−z}, 1 ⟩ + α₁ ‖z − u‖₂².     (15)

(ii) For z fixed, determine the solution u of

min_u α₁ ‖z − u‖₂² + α₂ ∑ |∇²u|.     (16)

In the discrete setting, the first subproblem decouples: minimizing (15) amounts to solving, at each pixel, the one-dimensional problem

min_z  z + f e^{−z} + α₁ (z − u)².     (17)

Straightforward computation shows that the minimizer of this problem is the solution of the nonlinear equation

1 − f e^{−z} + 2α₁ (z − u) = 0,     (18)

so there are as many decoupled nonlinear equations as pixels. The second derivative with respect to z of the first term in (17) equals f e^{−z}, which is always greater than zero, since we assume f > 0 in the multiplicative noise model (1). Therefore, the first term of the objective function is strictly convex for all z, and hence the objective function (17) is also strictly convex. Consequently, the corresponding nonlinear equation (18) has a unique solution, which can be determined very efficiently by Newton's method.
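The per-pixel Newton solve can be sketched as follows (Python/NumPy; the names `alpha1` and `iters` are illustrative choices of ours). Each pixel is an independent one-dimensional strictly convex problem, so the whole grid is updated vectorized:

```python
import numpy as np

def solve_z_subproblem(f, u, alpha1, iters=20):
    """Per-pixel Newton iteration for the strictly convex subproblem

        min_z  z + f * exp(-z) + alpha1 * (z - u)**2,

    the Gamma log-likelihood in the log domain plus a quadratic coupling
    to the current estimate u.  Assuming f > 0, the second derivative
    f*exp(-z) + 2*alpha1 is strictly positive, so the gradient has a
    unique root and the iteration is well defined.
    """
    z = u.copy()                       # warm start from the current estimate
    for _ in range(iters):
        grad = 1.0 - f * np.exp(-z) + 2.0 * alpha1 * (z - u)
        hess = f * np.exp(-z) + 2.0 * alpha1
        z -= grad / hess               # pixelwise Newton update
    return z
```

Because the Hessian is bounded below by 2·alpha1, the Newton step never divides by zero, and a handful of iterations drives the gradient residual to machine precision.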

Subproblem (16) is a high-order total variation regularization process for image denoising. Several effective algorithms for minimizing high-order total variation problems have been proposed in the literature. For example, many PDE-based methods are widely used to preserve edges and reduce the staircase effect for the high-order total variation problem; see [6, 22, 23, 30] for more details. In particular, inspired by the work of Chambolle [5], a dual algorithm and its convergence analysis for the minimization of the high-order model are presented in [24, 28]. The dual algorithm overcomes the numerical difficulties related to the nondifferentiability of the high-order model, and numerical results show that its convergence is faster than that of the gradient descent time-marching algorithm.

In the present paper, we employ the dual algorithm of [24, 28] for the high-order total variation denoising problem (16). The discrete version of problem (16) takes the form

min_u  (1/2) ‖u − z‖₂² + θ ∑ |∇²u|,  θ = α₂ / (2α₁).     (19)

In the dual algorithm, we need to solve a constrained minimization problem (20) over a dual variable p with |p_{i,j}| ≤ 1 at every pixel, where |·| denotes the pixelwise Frobenius norm. This problem can be solved by a semi-implicit fixed point iteration (written here with the true adjoint (∇²)* of the discrete Hessian):

p^{n+1} = ( p^n + τ q^n ) / ( 1 + τ |q^n| ),  q^n = ∇²( z/θ − (∇²)* p^n ),

where p⁰ = 0 and τ > 0 is the step size; see [24, 28] for more details. After the minimizer p of the constrained optimization problem (20) is determined, the denoised image can be computed by u = z − θ (∇²)* p. In addition, it is shown that if τ ≤ 1/64, then the iteration is convergent, and the image computed by the dual method solves the minimization problem (20) as n → ∞.
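The dual solver can be sketched as follows (Python/NumPy). This is our own discretization, not the paper's code: the Hessian H and its exact adjoint H* are built from 1D difference matrices, so adjointness holds by matrix transposition, and the step size 1/64 respects the bound on the squared norm of the discrete Hessian:

```python
import numpy as np

def diff1(n):
    """1D forward-difference matrix with Neumann boundary (last row zero)."""
    D = np.zeros((n, n))
    i = np.arange(n - 1)
    D[i, i], D[i, i + 1] = -1.0, 1.0
    return D

def hotv_denoise(g, theta, tau=1.0 / 64.0, iters=400):
    """Chambolle-type semi-implicit dual fixed point for

        min_u  0.5 * ||u - g||^2 + theta * sum_ij |Hessian(u)_ij|_F,

    a sketch of the dual scheme described in the text, written with the
    true adjoint of the discrete Hessian.  Discretization choices and
    defaults here are our own assumptions.
    """
    n, m = g.shape
    Dr, Dc = diff1(n), diff1(m)
    Ar, Ac = Dr @ Dr, Dc @ Dc            # 1D second differences

    def H(u):                            # (u_xx, u_xy, u_yx, u_yy)
        return np.stack([Ar @ u, Dr @ u @ Dc.T, Dr @ u @ Dc.T, u @ Ac.T])

    def Hstar(p):                        # exact adjoint of H (transposes)
        return (Ar.T @ p[0] + Dr.T @ p[1] @ Dc
                + Dr.T @ p[2] @ Dc + p[3] @ Ac)

    p = np.zeros((4, n, m))
    for _ in range(iters):
        q = H(g / theta - Hstar(p))
        p = (p + tau * q) / (1.0 + tau * np.sqrt((q ** 2).sum(axis=0)))
    return g - theta * Hstar(p)
```

Expressing H through matrix products makes the adjoint trivially correct, which is the delicate point of any dual method; a matrix-free stencil implementation would need the same adjointness verified by hand.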

Starting from an initial guess u⁰, the alternating minimization method computes a sequence of iterates z¹, u¹, z², u², … such that

z^{k+1} = arg min_z ⟨ z + f e^{−z}, 1 ⟩ + α₁ ‖z − u^k‖₂²,
u^{k+1} = arg min_u α₁ ‖z^{k+1} − u‖₂² + α₂ ∑ |∇²u|,

for k = 0, 1, 2, ….

We are now in a position to describe the alternating minimization algorithm for the multiplicative noise removal problem.

*Algorithm 1. *The alternating minimization algorithm for multiplicative noise removal:

Choose an initial guess u⁰ and set k = 0.(1) For u^k fixed, employ the Newton iterative method to compute z^{k+1} as the solution of subproblem (15).
(2) For z^{k+1} fixed, use the fixed point iteration to compute the dual variable p
and obtain u^{k+1} = z^{k+1} − θ (∇²)* p.
(3) Check the stopping criteria. If a stopping criterion is satisfied, then exit with the approximate restored image e^{u^{k+1}}; otherwise, let k = k + 1 and go to step (1).
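As an end-to-end illustration, the alternating scheme above can be sketched as follows (Python/NumPy; the discretization, parameter names, and default values are our illustrative choices, not the paper's code). A Newton pass in the log-domain variable alternates with a Chambolle-type dual solve of the high-order TV subproblem:

```python
import numpy as np

def diff1(n):
    """1D forward difference with Neumann boundary (last row zero)."""
    D = np.zeros((n, n))
    i = np.arange(n - 1)
    D[i, i], D[i, i + 1] = -1.0, 1.0
    return D

def algorithm1(f, alpha1, alpha2, outer=10, newton=15, dual=150):
    """Alternating minimization sketch for

        min_{z,u}  <z + f*exp(-z), 1> + alpha1*||z - u||^2 + alpha2*HTV(u).

    Step (1): per-pixel Newton for the strictly convex z-subproblem.
    Step (2): dual fixed point for the high-order TV subproblem in u,
    with effective weight theta = alpha2 / (2*alpha1).
    Returns exp(u), which is positive by construction.
    """
    n, m = f.shape
    Dr, Dc = diff1(n), diff1(m)
    Ar, Ac = Dr @ Dr, Dc @ Dc
    theta = alpha2 / (2.0 * alpha1)
    tau = 1.0 / 64.0                     # dual step size bound

    def H(u):
        return np.stack([Ar @ u, Dr @ u @ Dc.T, Dr @ u @ Dc.T, u @ Ac.T])

    def Hstar(p):
        return (Ar.T @ p[0] + Dr.T @ p[1] @ Dc
                + Dr.T @ p[2] @ Dc + p[3] @ Ac)

    u = np.log(f)
    for _ in range(outer):
        # (1) Newton iterations, vectorized over pixels
        z = u.copy()
        for _ in range(newton):
            grad = 1.0 - f * np.exp(-z) + 2.0 * alpha1 * (z - u)
            z -= grad / (f * np.exp(-z) + 2.0 * alpha1)
        # (2) dual fixed point for min_u 0.5||u - z||^2 + theta*HTV(u)
        p = np.zeros((4, n, m))
        for _ in range(dual):
            q = H(z / theta - Hstar(p))
            p = (p + tau * q) / (1.0 + tau * np.sqrt((q ** 2).sum(axis=0)))
        u = z - theta * Hstar(p)
    return np.exp(u)
```

Because the regularization acts in the logarithm domain, the output is always positive, and a smooth ramp (which has near-zero high-order TV) survives even fairly strong regularization.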

#### 4. Convergence Analysis

The main aim of this section is to analyze the convergence of the proposed alternating minimization algorithm. We first remark that the objective function in (14) is strictly convex, being the sum of a convex function and two strictly convex functions; see [33] for more details. Hence the objective function has a unique minimizer. In the following we show that our alternating minimization algorithm gives asymptotically the solution of the discrete problem associated with (14). We have the following theorem.

Theorem 2. *For any initial guess u⁰, the sequence (z^k, u^k) generated by Algorithm 1 converges to the minimizer of the objective function as k → ∞. *

*Proof. *It follows from the alternating iterative process in Algorithm 1 that

It is obvious that the sequence of objective values is nonincreasing, and it is bounded from below by the minimum value. Since the terms of the objective are convex, the smooth part is convex and differentiable, and the associated Bregman distance is well defined, we know from [34, 35] that the iteration has the five-point property; hence the sequence of objective values converges to the minimum value.
We note that if the objective function is coercive, then the sequence of iterates is bounded, since the sequence of objective values converges.

Now we need to show that the objective function is coercive. Let , , , and be the second order difference matrices in the , , , and direction, respectively, and
It is not difficult to obtain the lower bound of the discrete high-order total variation as follows:
Let and . Denote
If , we obtain . As , it is easy to show that
On the other hand, we obtain
since it is a strictly convex function. By using the above inequality, we have
Hence the objective function tends to infinity as its argument grows without bound, and it follows from the definition of coercivity that the objective function is coercive.

Since the sequence is bounded, we can thus extract a convergent subsequence from such that . Then we obtain . Moreover, we have, for all and all ,
and for all and all ,

Let us denote by a cluster point of *.* We may immediately obtain from (27) that
Thus, we have that . We conclude that since and are the minimizers of the strictly convex function defined in (23). Hence, . Following a similar analysis, we can show that . By passing to the limit in (35) and (36), we have
We know that (38) can be rewritten as follows:
From the definition of , (39) is equivalent to the following:
where and . The subdifferential of at is given by
Therefore, we have
which implies that the limit point is the minimizer of the objective function. Hence the whole subsequence converges towards the unique minimizer, and we deduce that the sequence (z^k, u^k) converges to the minimizer of the objective function as k tends to infinity.

#### 5. Numerical Experiments

In this section we provide numerical results for the multiplicative noise removal problem to demonstrate the performance of the proposed regularization method. All computations were carried out in Matlab 7.10; the Matlab codes were run on an Intel Core i3 CPU (2.27 GHz) with 2 GB of RAM. The initial guess is chosen to be the noisy image in all tests.

The quality of the restoration results obtained with the different methods is compared quantitatively using the peak signal-to-noise ratio (PSNR), the relative error (ReErr), and the structural similarity (SSIM) index. The PSNR and ReErr are computed from the ideal image and the restored image in the usual way, and the SSIM index is computed from the means and variances of the two images, their covariance, and two positive constants that can be thought of as stabilizing constants for near-zero denominator values. In general, a higher PSNR value or a smaller ReErr value indicates a more accurate restoration. The SSIM is a well-known quality metric used to measure the similarity between two images. The method, developed by Wang et al. [36], is based on three specific statistical measures that are much closer to how the human eye perceives differences between images. The whiter the SSIM map, the closer the two images are. Therefore, we use the SSIM map to reveal areas of high or low similarity between two images in this work.
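The three metrics can be sketched as follows (Python/NumPy). The SSIM here is a single global value computed from whole-image statistics; the SSIM *maps* shown in the experiments apply the same formula over local windows, and the stabilizing constants below follow the common 0.01/0.03 convention rather than values stated in the paper:

```python
import numpy as np

def psnr(u, v, peak=255.0):
    """Peak signal-to-noise ratio in dB; peak is the maximum intensity."""
    return 10.0 * np.log10(peak ** 2 / np.mean((u - v) ** 2))

def rel_err(u, v):
    """Relative error ||v - u||_2 / ||u||_2, with u the ideal image."""
    return np.linalg.norm(v - u) / np.linalg.norm(u)

def ssim_global(u, v, peak=255.0):
    """Global SSIM from means, variances, and covariance of u and v."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2   # stabilizing constants
    mu_u, mu_v = u.mean(), v.mean()
    var_u, var_v = u.var(), v.var()
    cov = ((u - mu_u) * (v - mu_v)).mean()
    return ((2 * mu_u * mu_v + c1) * (2 * cov + c2)) / \
           ((mu_u ** 2 + mu_v ** 2 + c1) * (var_u + var_v + c2))
```

An identical pair gives SSIM exactly 1, while strongly anticorrelated images drive the covariance term negative, which is why SSIM discriminates structure rather than just pixelwise error.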

We note that a straightforward high-order total variation regularization scheme for multiplicative noise removal is to solve the optimization problem (45) that combines the data-fitting term with the high-order total variation, where a positive regularization parameter measures the trade-off between a good fit and a regularized solution. The Euler-Lagrange equation of the minimization problem (45) amounts to solving a fourth-order PDE (omitting the boundary conditions). Hence, the time-marching (TM) algorithm solves the high-order total variation model by gradient descent on this equation, where a small parameter is introduced to avoid division by zero.

In the first test, we compare our method with the straightforward high-order total variation regularization scheme. In the time-marching algorithm for solving this regularization problem, we set the step size to 0.1 in order to obtain a stable iterative procedure; the algorithm is stopped when the maximum number of iterations is reached. In addition, we carry out many experiments with different values of the regularization parameter in the model (45), and the ones with the best results are presented in this work. For the proposed high-order total variation method (HighTV method), we terminate the iterations and accept the current iterate as the computed approximation of the ideal image as soon as the maximum number of allowed outer iterations has been carried out or the relative difference between consecutive iterates falls below a prescribed tolerance. In the proposed HighTV regularization method, there are two regularization parameters, and the quality of the restored image depends strongly on them. Similar to [19], we fix one of the parameters in order to reduce the computational time spent searching for good regularization parameters, and we determine the best value of the other parameter as the one for which the relative error of the restored image with respect to the ideal image is smallest.
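The relative-difference stopping rule can be written as a small helper (Python/NumPy; the specific tolerance used in the experiments is an assumption here, the form of the test is what the text describes):

```python
import numpy as np

def converged(u_new, u_old, tol=1e-4):
    """Relative change between consecutive iterates:

        ||u_new - u_old||_2 <= tol * ||u_new||_2.

    The tolerance value 1e-4 is an illustrative default, not the
    paper's reported setting.
    """
    return np.linalg.norm(u_new - u_old) <= tol * np.linalg.norm(u_new)
```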

In the first experiment, we consider the “Cameraman” image. The original image in Figure 1(a) is distorted by Gamma noise at two noise levels, and the noisy images are shown in Figures 1(b) and 1(c). It is seen from Figures 1(b) and 1(c) that the smaller the number of looks is, the noisier the observed images are. For the noisier case, the relative error (ReErr) between the noisy image and the original image is 0.3150; for the less noisy case, the ReErr is 0.2234.

In Figures 2(a)–2(d), we show the restoration results of the two methods at the two noise levels. According to the figures, the proposed algorithm provides better visual quality of the restored images than the time-marching algorithm. In Figures 3(a) and 3(b), we compare the speed of convergence of the two algorithms. In these figures, the x-axis is the number of iterations and the y-axis is the relative error between the restored image and the original image. We see that the proposed algorithm converges faster and reaches a smaller relative error. In Table 1, we compare the computational time and the relative error at the point where the best restored images are obtained. From Table 1, we observe that the computational time required by our method is less than that of the time-marching algorithm.

In the following, we compare the proposed HighTV method with the method proposed in [18] (AA method), the method proposed in [19] (HNW method), and the method proposed in [20] (MIDAL method). For the AA method, we use the time-marching algorithm to solve the minimization model as proposed in [18]. As before, we set the step size to 0.1 in order to obtain a stable iterative procedure, and the algorithm is stopped when the maximum number of iterations is reached. In addition, we carry out many experiments with different values of the regularization parameter in the model (3), and the ones with the best results are presented in this work. For the HNW and MIDAL methods, we use the same stopping rule as for the HighTV method.

For the MIDAL method, the authors verified experimentally that setting its two parameters equal yields good results; therefore, we only select this common value by searching for the one leading to the smallest relative error of the restored image with respect to the ideal image. In the HNW method, there are two regularization parameters; in [19], the authors fix one of them in all tests in order to reduce the computational time spent searching for good regularization parameters. We use the best value of the other parameter, chosen so that the relative error of the restored image with respect to the ideal image is smallest. In this way, we have a fair comparison, since we compare the best restorations of the four different methods.

The aim of the second test is to give evidence of the effectiveness of the proposed algorithm over the other three algorithms for multiplicative noise removal problems. In particular, we show that the proposed method can reduce the staircase effect. In this test, we consider the well-known “Lena” image. The original image in Figure 4(a) is distorted by Gamma noise at two noise levels, and the noisy images are shown in Figures 4(b) and 4(c). For the noisier case, the PSNR and the relative error (ReErr) between the noisy image and the original image are 11.95 dB and 0.5768; for the less noisy case, they are 18.37 dB and 0.2765.

The restored images obtained by the AA method, the HNW method, the MIDAL method, and the proposed method are shown in Figures 5(a)–5(h). Compared with the AA, HNW, and MIDAL methods, the proposed approach yields better restoration results, since it avoids the staircase effect of the general total variation methods while preserving edges equally well. In Figure 6, we have enlarged some details of the eight restored images. It is clear that the models involving the general total variation transform smooth regions into piecewise constant regions. As seen in the zoomed parts, the proposed method outperforms the other three methods since it processes smooth regions better than the other three models. For a quantitative comparison of performance and computational efficiency, Table 2 gives the restoration results in terms of PSNR and ReErr. We observe from Table 2 that both the PSNR and ReErr values of the images restored by the proposed method are better than those of the AA, HNW, and MIDAL methods.

In the third test, the “barche” image is considered. The original image in Figure 7(a) is distorted by Gamma noise at two noise levels; see Figures 7(b) and 7(c). For the noisier case, the PSNR and the ReErr between the noisy image and the original image are 13.20 dB and 0.4090; for the less noisy case, they are 17.44 dB and 0.2514.

The restored images obtained by the AA method, the HNW method, the MIDAL method, and the proposed method are displayed in Figures 8(a)–8(h). In terms of visual quality, the proposed regularization method is quite competitive with the other three algorithms. In addition, Figures 9(a)–9(h) show the SSIM maps of the restored images. It is not difficult to observe that the SSIM maps of the proposed method are whiter than those of the AA, HNW, and MIDAL methods, which means that the proposed method behaves slightly better. The restoration results of the four methods in terms of PSNR and ReErr are also presented in Table 2, which shows that both the PSNR and ReErr values of the images restored by the proposed method are better than those of the AA, HNW, and MIDAL methods.

#### 6. Conclusions

In this paper, we investigate a fast and efficient way to restore images corrupted by multiplicative noise using a high-order total variation minimization technique. An alternating minimization algorithm is employed to solve the high-order total variation model. The proposed algorithm preserves edges quite well in the restored image while effectively reducing the staircase effect. Our experimental results show that the performance of the proposed method is superior to that of some existing total variation restoration methods.

#### Acknowledgments

This research is supported by NSFC (10871034, 61170311), Sichuan Province Sci. & Tech. Research Project (2011JY0002, 12ZC1802), Jiangsu Province Research Project (12KJD110001) and Research Grant KQ12013 from HuaiHai Institute of Technology.

#### References

- L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” *Physica D*, vol. 60, no. 1–4, pp. 259–268, 1992.
- A. Chambolle and P.-L. Lions, “Image recovery via total variation minimization and related problems,” *Numerische Mathematik*, vol. 76, no. 2, pp. 167–188, 1997.
- M. Bertero and P. Boccacci, *Introduction to Inverse Problems in Imaging*, Institute of Physics Publishing, London, UK, 1998.
- T. F. Chan and J. Shen, *Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods*, SIAM, Philadelphia, Pa, USA, 2005.
- A. Chambolle, “An algorithm for total variation minimization and applications,” *Journal of Mathematical Imaging and Vision*, vol. 20, no. 1-2, pp. 89–97, 2004.
- T. F. Chan, S. Esedoglu, and F. E. Park, “Image decomposition combining staircase reduction and texture extraction,” *Journal of Visual Communication and Image Representation*, vol. 18, no. 6, pp. 464–486, 2007.
- S. Hu, Z. Liao, and W. Chen, “Image denoising based on dilated singularity prior,” *Mathematical Problems in Engineering*, vol. 2012, Article ID 767613, 15 pages, 2012.
- S. Chountasis, V. N. Katsikis, and D. Pappas, “Digital image reconstruction in the spectral domain utilizing the Moore-Penrose inverse,” *Mathematical Problems in Engineering*, vol. 2010, Article ID 750352, 14 pages, 2010.
- S. Becker, J. Bobin, and E. J. Candès, “NESTA: a fast and accurate first-order method for sparse recovery,” *SIAM Journal on Imaging Sciences*, vol. 4, no. 1, pp. 1–39, 2011.
- A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” *IEEE Transactions on Image Processing*, vol. 18, no. 11, pp. 2419–2434, 2009.
- Y. L. Wang, J. F. Yang, W. T. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” *SIAM Journal on Imaging Sciences*, vol. 1, no. 3, pp. 248–272, 2008.
- L. Xiao, L. L. Huang, and Z. H. Wei, “A weberized total variation regularization-based image multiplicative noise removal algorithm,” *EURASIP Journal on Advances in Signal Processing*, vol. 2010, Article ID 490384, 2010.
- S. Durand, J. Fadili, and M. Nikolova, “Multiplicative noise removal using L1 fidelity on frame coefficients,” *Journal of Mathematical Imaging and Vision*, vol. 36, no. 3, pp. 201–226, 2010.
- P. J. Green, “Reversible jump Markov chain Monte Carlo computation and Bayesian model determination,” *Biometrika*, vol. 82, no. 4, pp. 711–732, 1995.
- C. Oliver and S. Quegan, *Understanding Synthetic Aperture Radar Images*, SciTech Publishing, 2004.
- J. Goodman, “Some fundamental properties of speckle,” *Journal of the Optical Society of America*, vol. 66, pp. 1145–1150, 1976.
- L. Rudin, P.-L. Lions, and S. Osher, “Multiplicative denoising and deblurring: theory and algorithms,” in *Geometric Level Sets in Imaging, Vision and Graphics*, S. Osher and N. Paragios, Eds., pp. 103–119, Springer, New York, NY, USA, 2003.
- G. Aubert and J.-F. Aujol, “A variational approach to removing multiplicative noise,”
*SIAM Journal on Applied Mathematics*, vol. 68, no. 4, pp. 925–946, 2008. View at Publisher · View at Google Scholar · View at MathSciNet - Y.-M. Huang, M. K. Ng, and Y.-W. Wen, “A new total variation method for multiplicative noise removal,”
*SIAM Journal on Imaging Sciences*, vol. 2, no. 1, pp. 20–40, 2009. View at Publisher · View at Google Scholar · View at MathSciNet - J. M. Bioucas-Dias and M. A. T. Figueiredo, “Multiplicative noise removal using variable splitting and constrained optimization,”
*IEEE Transactions on Image Processing*, vol. 19, no. 7, pp. 1720–1730, 2010. View at Publisher · View at Google Scholar · View at MathSciNet - G. Steidl and T. Teuber, “Removing multiplicative noise by Douglas-Rachford splitting methods,”
*Journal of Mathematical Imaging and Vision*, vol. 36, no. 2, pp. 168–184, 2010. View at Publisher · View at Google Scholar · View at MathSciNet - T. Chan, A. Marquina, and P. Mulet, “High-order total variation-based image restoration,”
*SIAM Journal on Scientific Computing*, vol. 22, no. 2, pp. 503–516, 2000. View at Publisher · View at Google Scholar · View at MathSciNet - M. Lysaker, A. Lundervold, and X. C. Tai, “Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time,”
*IEEE Transactions on Image Processing*, vol. 12, no. 12, pp. 1579–1590, 2003. View at Publisher · View at Google Scholar · View at Scopus - G. Steidl, “A note on the dual treatment of higher-order regularization functionals,”
*Computing*, vol. 76, no. 1-2, pp. 135–148, 2006. View at Publisher · View at Google Scholar · View at MathSciNet - B. Chen, J.-L. Cai, W.-S. Chen, and Y. Li, “A multiplicative noise removal approach based on partial differential equation model,”
*Mathematical Problems in Engineering*, vol. 2012, Article ID 242043, 14 pages, 2012. View at Publisher · View at Google Scholar · View at MathSciNet - J. Zhang, Z. H. Wei, and L. Xiao, “Adaptive fractional-order multi-scale method for image denoising,”
*Journal of Mathematical Imaging and Vision*, vol. 43, no. 1, pp. 39–49, 2012. View at Publisher · View at Google Scholar · View at MathSciNet - S. Lefkimmiatis, A. Bourquard, and M. Unser, “Hessian-based norm regularization for image restoration with biomedical applications,”
*IEEE Transactions on Image Processing*, vol. 21, no. 3, pp. 983–995, 2012. View at Publisher · View at Google Scholar · View at MathSciNet - H.-Z. Chen, J.-P. Song, and X.-C. Tai, “A dual algorithm for minimization of the LLT model,”
*Advances in Computational Mathematics*, vol. 31, no. 1–3, pp. 115–130, 2009. View at Publisher · View at Google Scholar · View at MathSciNet - Y. Hu and M. Jacob, “Higher degree total variation (HDTV) regularization for image recovery,”
*IEEE Transactions on Image Processing*, vol. 21, no. 5, pp. 2559–2571, 2012. View at Publisher · View at Google Scholar · View at MathSciNet - F. Li, C. M. Shen, J. S. Fan, and C. L. Shen, “Image restoration combining a total variational filter and a fourth-order filter,”
*Journal of Visual Communication and Image Representation*, vol. 18, no. 4, pp. 322–330, 2007. View at Publisher · View at Google Scholar · View at Scopus - M. Lysaker and X. C. Tai, “Iterative image restoration combining total variation minimization and a second-order functional,”
*International Journal of Computer Vision*, vol. 66, no. 1, pp. 5–18, 2006. View at Publisher · View at Google Scholar · View at Scopus - K. Papafitsoros and C. B. Schönlieb, “A combined first and second order variational approach for image reconstruction,” http://arxiv.org/abs/1202.6341.
- D. P. Bertsekas, A. Nedic, and E. Ozdaglar,
*Convex Analysis and Optimization*, Athena Scientific, Belmont, Mass, USA, 2003. View at MathSciNet - C. L. Byne, “Alternating minimization as sequential unconstrained minimization: a survey,”
*Journal of Optimization Theory and Applications*, vol. 156, no. 3, pp. 554–566, 2012. - H. H. Bauschke, P. L. Combettes, and D. Noll, “Joint minimization with alternating Bregman proximity operators,”
*Pacific Journal of Optimization*, vol. 2, no. 3, pp. 401–424, 2006. View at MathSciNet - Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,”
*IEEE Transactions on Image Processing*, vol. 13, no. 4, pp. 600–612, 2004. View at Publisher · View at Google Scholar · View at Scopus