Abstract

The multiplicative noise removal problem has received considerable attention in recent years. The total variation regularization method for this problem preserves edges well but often suffers from the undesirable staircase effect. In this paper, we propose a fast high-order total variation minimization method to restore images corrupted by multiplicative noise. The proposed method preserves edges and at the same time avoids the staircase effect in smooth regions. An alternating minimization algorithm is employed to solve the proposed high-order total variation minimization problem, and we discuss its convergence. Numerical results show that the proposed method gives restored images of higher quality than some existing multiplicative noise removal methods.

1. Introduction

The image denoising problem has been widely studied in image processing. It includes both additive and multiplicative noise removal. Most of the literature deals with the additive noise model [1-4]. Given a noisy image f = u + n, where u is the original image and n is the noise, the denoising problem is to recover u from the observed image f. The additive noise model has been extensively studied; we refer the reader to [5-11] and references therein for a review of various methods.

In this paper, we consider the problem of recovering the original image from a multiplicative noisy observation. In the multiplicative noise model, the recorded image f is the product of the original image u and the noise η:

f = u η.    (1)

This problem arises in ultrasound imaging, synthetic aperture radar (SAR) and sonar (SAS), laser imaging, and magnetic field inhomogeneity in MRI [12, 13]. In this work, we concentrate on the assumption that η follows a Gamma distribution, which commonly occurs in SAR. With respect to a given resolution cell of the imaging device, the complex SAR image is computed after the SAR system receives the coherent sum of reflected monochromatic microwaves. The complex SAR image at one point is usually modeled by a complex zero-mean circular Gaussian density (i.e., the real and imaginary parts are independent Gaussian variables with a common variance). The intensity image is defined as the square of the magnitude of the corresponding complex SAR image; its pixel values are independently and exponentially distributed [14, 15]. Since the complex observation is acquired by a highly coherent system, the intensity image exhibits a peculiar granular appearance known as speckle. A conventional SAR system reduces the speckle effect of the intensity image by the multilooking process (L-look in the case of L looks), that is, by averaging the intensity images observed from slightly different angles of the same resolution cell [16].
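To make the L-look noise model concrete, the following MATLAB lines simulate mean-one Gamma speckle by averaging L unit-mean exponential intensity samples and apply it to a synthetic test image; the ramp image and the value L = 4 are illustrative assumptions of this sketch, not settings taken from the paper.

    % Illustrative simulation of L-look Gamma speckle (mean 1, variance 1/L).
    % The synthetic ramp image and L = 4 are assumptions of this sketch only.
    u = repmat(linspace(50, 200, 128), 128, 1);   % synthetic test image
    L = 4;                                        % number of looks (assumed)
    [m, n] = size(u);
    eta = mean(-log(rand(m, n, L)), 3);           % average of L Exp(1) samples, Gamma(L, 1/L)
    f = u .* eta;                                 % observed image following model (1)
    fprintf('noise mean %.3f, variance %.3f (theory: 1 and %.3f)\n', ...
            mean(eta(:)), var(eta(:)), 1/L);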

Due to the coherent nature of these image acquisition processes, nearly all the information of the original image may be lost when it is distorted by multiplicative noise. In general, the standard additive denoising methods, so prevalent in image processing, are inadequate. Therefore, it is necessary to devise efficient and reliable algorithms for recovering the true image from the observed multiplicative noisy image. In the past decade, a few variational approaches devoted to multiplicative noise removal have been proposed. An early variational approach for multiplicative noise removal is the one by Rudin et al. [17]. Based on the statistical properties of the multiplicative noise η, the recovery of the image was formulated as the following constrained optimization problem [17]:

min_u ∫_Ω |∇u| dx   subject to   ∫_Ω (f/u) dx = 1   and   ∫_Ω (f/u − 1)^2 dx = σ²,    (2)

where ∫_Ω |∇u| denotes the seminorm on the space of bounded variation (BV), which coincides with the integral of |∇u| when u is smooth. The two constraints state that the mean of the noise is equal to 1 and its variance is equal to σ². In (2), only basic statistical properties of the noise, namely the mean and the variance, are taken into account, which somewhat limits the restoration results.

By using a maximum a posteriori (MAP) estimator, Aubert and Aujol [18] proposed a functional whose minimizer corresponds to the denoised image to be recovered. This functional is

E(u) = ∫_Ω ( log u + f/u ) dx + λ ∫_Ω |∇u| dx,    (3)

where the total variation of u is utilized as the regularization term and λ is the regularization parameter which controls the trade-off between a good fit to f and a smoothness requirement due to the total variation regularization. Although this functional is not convex, they proved the existence of a minimizer and demonstrated the capability of their model on several numerical examples.

Since the objective function (3) is not convex for all u > 0, the computed solution may not be the global minimizer of (3). To overcome this problem, Huang et al. [19] proposed and studied a strictly convex objective function for the multiplicative noise removal problem. In [19], the authors introduced an auxiliary variable in (3) and worked in the logarithmic domain. With the new variables, the proposed unconstrained total variation denoising problem is described as follows:

min_{w,z} ∑_{i,j} ( w_{i,j} + f_{i,j} e^{−w_{i,j}} ) + (α1/2) ||w − z||_2^2 + α2 ||z||_TV,    (4)

where α1 and α2 are positive regularization parameters. The parameter α1 measures the trade-off between an image obtained by maximum likelihood estimation from the first term and a total variation denoised image z. The parameter α2 measures the amount of regularization applied to the denoised image z. The main advantage of this formulation is that the total variation norm can be used in the noise removal process in an efficient manner. Therefore, the method proposed in [19] preserves edges very well in the denoised image.

Recently, Bioucas-Dias and Figueiredo [20] proposed an efficient multiplicative noise removal method based on variable splitting and constrained optimization. After converting the multiplicative noise model into an additive one by taking logarithms, they used variable splitting to obtain an equivalent constrained problem and then solved this problem with the augmented Lagrangian method. A set of experiments has shown that their method, which they named MIDAL (multiplicative image denoising by augmented Lagrangian), yields state-of-the-art results both in terms of speed and denoising performance. In [21], Steidl and Teuber considered a variational restoration model consisting of the I-divergence as data fitting term and the total variation seminorm or nonlocal means as regularizer for removing multiplicative noise. They proposed to compute the minimizers of their restoration functionals by Douglas-Rachford splitting techniques. For a particular splitting, they presented a semi-implicit scheme to solve the involved nonlinear systems of equations and proved its linear convergence.

It is known that the total variation denoising method is a PDE-based technique that preserves edges well but often suffers from the undesirable staircase effect, namely, the transformation of smooth regions (ramps) into piecewise constant regions (stairs) [22-26]. Approaches based on the traditional total variation therefore often cause the staircase effect, since they favor piecewise constant solutions. Staircase solutions fail to satisfy visual quality requirements and can develop false edges that do not exist in the true image. To attenuate the staircase effect, there is a growing interest in the literature in replacing the total variation norm by a high-order total variation norm. The motivation behind such an attempt is to restore a potentially wider class of images, comprising more than merely piecewise constant regions. The majority of the high-order norms involve second-order differential operators because piecewise-vanishing second-order derivatives lead to piecewise-linear solutions that better fit smooth intensity changes; see [27] for more details. There are two classes of second-order regularization methods for image restoration problems. The first class employs a second-order regularizer in a standalone way. For example, in [28], the authors considered a fourth-order partial differential equation (PDE) model for noise removal and employed the dual algorithm of Chambolle for solving the high-order problems. In [29], Hu and Jacob applied higher degree total variation (HDTV) regularization to image recovery. The second class combines the TV norm with a second-order regularizer. For example, a technique combining the TV filter with a fourth-order PDE filter was proposed in [30, 31] to preserve edges and at the same time avoid the staircase effect in smooth regions. In [32], Papafitsoros and Schönlieb considered a high-order model involving convex functions of the TV and the TV of the first derivatives for image restoration problems and used the split Bregman method to solve the corresponding discretized problem numerically.

In this paper, we present a fast high-order total variation minimization method for the multiplicative noise removal problem. This technique substantially reduces the staircase effect while preserving sharp jump discontinuities (edges). An alternating minimization algorithm is employed to solve the proposed high-order total variation minimization model, and we discuss its convergence. Numerical results show that the proposed method gives restored images of higher quality than some existing multiplicative noise removal methods.

The organization of this paper is as follows. In the next section, we present the high-order total variation minimization method for the multiplicative noise removal problem. In Section 3, an alternating minimization algorithm is employed to find the minimizer of the proposed minimization problem. We analyze the convergence of the proposed alternating minimization algorithm in Section 4. Numerical experiments illustrating the performance of the proposed algorithm are given in Section 5. Concluding remarks are given in the last section.

2. A High-Order Total Variation Multiplicative Denoising Model

In this section, our aim is to propose a strictly convex objective function for restoring images distorted by multiplicative noise. Most importantly, we apply a high-order total variation regularizer in the objective function to achieve sharp edges and an effective reduction of the staircase effect. We derive our multiplicative denoising model from the statistical perspective using the Bayesian formula.

Let f, u, and η denote samples of the random variables associated with the observed image, the original image, and the noise, respectively. Moreover, we assume that the noise random variables at different pixels are mutually independent and identically distributed (i.i.d.) and that the noise at each pixel follows a Gamma distribution; that is, its probability density function is

p(η) = ( θ^L / Γ(L) ) η^{L−1} e^{−θη},   η ≥ 0,

where Γ(·) is the usual Gamma function, and L and θ denote the shape and inverse scale (rate) parameters of the Gamma distribution, respectively. We note that the mean of a Gamma-distributed variable is L/θ and its variance is L/θ². Since η acts as multiplicative noise, in this work we assume that the mean of η equals 1, so that θ = L and the variance of η equals 1/L.
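Under this mean-one Gamma assumption, the conditional density of the observation f = uη given u, and hence the data-fidelity term used in the derivation below, follows from a change of variables; the LaTeX lines below sketch this standard computation with the notation of this section.

    % Mean-one Gamma density of the noise (shape L, rate L):
    p(\eta) = \frac{L^{L}}{\Gamma(L)}\,\eta^{L-1} e^{-L\eta}, \qquad \eta \ge 0 .
    % Change of variables \eta = f/u (with d\eta = df/u) gives the likelihood
    p(f \mid u) = \frac{L^{L}}{\Gamma(L)\,u^{L}}\, f^{L-1} e^{-Lf/u},
    % so that, up to terms independent of u,
    -\log p(f \mid u) = L\left(\log u + \frac{f}{u}\right) + \mathrm{const}.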

According to maximum a posteriori (MAP) estimation, the restored image can be determined by

û = arg max_u  p(u | f).    (6)

From the Bayes rule, we have p(u | f) = p(f | u) p(u) / p(f). It then follows that maximizing p(u | f) is equivalent to maximizing p(f | u) p(u). Using the proposition in [18],

p(f | u) = ( L^L / (Γ(L) u^L) ) f^{L−1} e^{−L f/u},

we obtain the data-fidelity part of the model. Taking the logarithmic transformation w = log u into account, we assume that the image prior, together with an auxiliary variable z, is given by

p(u, z) ∝ exp( −(γ1/2) ||w − z||_2^2 − γ2 ||∇²z||_1 ),

where γ1 and γ2 are two positive scalars and ||∇²z||_1 is the high-order total variation of z. Here we assume that the difference between w = log u and z follows a Gaussian distribution and that z follows a high-order total variation prior. Thus, maximizing p(u | f) amounts to minimizing the negative log-likelihood −log p(f | u) − log p(u) + log p(f). Since log p(f) is constant, (6) can be rewritten as the minimization of −log p(f | u) − log p(u). Note that, with w = log u, we have −log p(f | u) = L ( w + f e^{−w} ) + const, and the factor L can be absorbed into the regularization parameters. The previous computation leads us to propose the following high-order total variation model for restoring images corrupted by Gamma noise, obtained by introducing the new variable z:

min_{w,z}  ∑_{i,j} ( w_{i,j} + f_{i,j} e^{−w_{i,j}} ) + (α1/2) ||w − z||_2^2 + α2 ||∇²z||_1,    (14)

where α1 and α2 are positive regularization parameters, and the restored image is u = e^z once the minimizer is computed. In the above model, ∇²z denotes the Hessian of z, |(∇²z)_{i,j}| is the Frobenius norm of the Hessian at pixel (i, j), and ||∇²z||_1 = ∑_{i,j} |(∇²z)_{i,j}|. The parameter α1 measures the trade-off between an image obtained by maximum likelihood estimation from the first term and a high-order total variation denoised image z. The parameter α2 measures the amount of regularization applied to the denoised image z. The main advantage of the proposed method is that the high-order total variation norm can be used in the noise removal process in an efficient manner. Therefore, the proposed method preserves edges very well and reduces the staircase effect in the denoised image.
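To illustrate the regularizer in (14), the following MATLAB sketch evaluates the discrete high-order total variation of an image, that is, the sum over all pixels of the Frobenius norm of a finite-difference Hessian; the replicate boundary handling and the central mixed difference are simplifying assumptions of this sketch, not the discretization prescribed by the paper.

    % Discrete high-order TV: sum of pointwise Frobenius norms of the Hessian.
    % Replicate boundary handling is an assumption of this sketch.
    function htv = high_order_tv(z)
        zp = pad_replicate(z);                      % pad by one pixel on each side
        zxx = zp(3:end,2:end-1) - 2*zp(2:end-1,2:end-1) + zp(1:end-2,2:end-1);
        zyy = zp(2:end-1,3:end) - 2*zp(2:end-1,2:end-1) + zp(2:end-1,1:end-2);
        zxy = ( zp(3:end,3:end) - zp(3:end,1:end-2) ...
              - zp(1:end-2,3:end) + zp(1:end-2,1:end-2) ) / 4;   % central mixed difference
        htv = sum(sum( sqrt(zxx.^2 + zyy.^2 + 2*zxy.^2) ));
    end

    function zp = pad_replicate(z)
        % one-pixel replicate padding without toolbox dependencies
        zp = z([1 1:end end], [1 1:end end]);
    end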

Moreover, it is obvious that when u contains an edge, log u also contains an edge at the same location; that is, the logarithmic transformation preserves image edges; see [19] for more details. Based on this observation, we can view z as an image in the logarithm domain and apply a high-order total variation regularization to z to recover its edges and reduce the staircase effect. It is interesting to note that z in the proposed model can be positive, zero, or negative, and the corresponding restored image u = e^z is still positive, whereas u in the objective function (3) must be positive. In particular, when z in the proposed model takes a large negative value, u = e^z is simply close to zero.

3. The Alternating Minimization Algorithm

In this section, an alternating minimization algorithm is employed to solve the proposed high-order total variation minimization problem efficiently. To solve (14), we need to consider the following two minimization subproblems.

(i) For z fixed, determine the solution of

min_w ∑_{i,j} ( w_{i,j} + f_{i,j} e^{−w_{i,j}} ) + (α1/2) ||w − z||_2^2.    (15)

(ii) For w fixed, determine the solution of

min_z (α1/2) ||w − z||_2^2 + α2 ||∇²z||_1.    (16)

In the discrete setting, the first subproblem decouples pixel by pixel; at each pixel (i, j) it amounts to solving the optimization problem

min_{w_{i,j}}  ( w_{i,j} + f_{i,j} e^{−w_{i,j}} ) + (α1/2) ( w_{i,j} − z_{i,j} )^2.    (17)

Straightforward computation shows that the minimizer of the above problem is the solution of the nonlinear equation

1 − f_{i,j} e^{−w_{i,j}} + α1 ( w_{i,j} − z_{i,j} ) = 0.    (18)

There is one such decoupled nonlinear equation per pixel. The second derivative with respect to w_{i,j} of the first term in (17) is equal to f_{i,j} e^{−w_{i,j}}, which is always greater than zero; here we assume that f > 0 in the multiplicative noise model (1). Therefore, the first term of the objective function is strictly convex. Hence, the objective function in (17) is also strictly convex, the corresponding nonlinear equation (18) has a unique solution, and this solution can be determined very efficiently by Newton's method.
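A few elementwise Newton steps solve (18) at every pixel simultaneously. The MATLAB sketch below assumes the per-pixel objective has exactly the form (17); the initialization at z and the fixed number of iterations are choices made for this illustration only.

    % Pixelwise Newton iterations for the w-subproblem (17)/(18), assuming the
    % per-pixel objective  (w + f*exp(-w)) + (alpha1/2)*(w - z)^2 .
    function w = solve_w_subproblem(f, z, alpha1, niter)
        w = z;                                        % initialize at the current z (assumed)
        for it = 1:niter
            grad = 1 - f.*exp(-w) + alpha1*(w - z);   % first derivative, eq. (18)
            hess = f.*exp(-w) + alpha1;               % second derivative, always positive
            w = w - grad ./ hess;                     % elementwise Newton update
        end
    end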

Subproblem (16) is a high-order total variation regularization step for image denoising. Several effective algorithms for minimizing high-order total variation problems have been proposed in the literature. For example, PDE-based methods are widely used to preserve edges and reduce the staircase effect for the high-order total variation problem; see [6, 22, 23, 30] for more details. In particular, inspired by the work of Chambolle [5], a dual algorithm and its convergence analysis for the minimization of the high-order model are presented in [24, 28]. The dual algorithm overcomes the numerical difficulties related to the nondifferentiability of the high-order model, and numerical results show that it converges faster than the gradient descent time-marching algorithm.

In the present paper, we employ the dual algorithm proposed in [24, 28] for the high-order total variation denoising problem (16). Setting g = w and λ = α2/α1, the discrete version of problem (16) can be written as

min_z (1/(2λ)) ||z − g||_2^2 + ||∇²z||_1.    (19)

In the dual algorithm, we need to solve the following constrained minimization problem:

min_p ||λ div² p − g||_2^2   subject to   |p_{i,j}| ≤ 1 for all (i, j),    (20)

where p is a dual variable with one 2 × 2 matrix p_{i,j} per pixel, |p_{i,j}| denotes its Frobenius norm, and div² is the adjoint of the discrete Hessian operator ∇². This problem can be solved by a semi-implicit fixed point iteration:

p^{n+1} = ( p^n + τ ∇²( div² p^n − g/λ ) ) / ( 1 + τ |∇²( div² p^n − g/λ )| ),    (21)

where p^0 = 0 and τ > 0 is the step size; see [24, 28] for more details. After the minimizer p of the constrained optimization problem (20) is determined, the denoised image can be computed by z = g − λ div² p. In addition, it is shown in [24, 28] that if the step size τ is small enough, then λ div² p^n is convergent and g − λ div² p^n tends to the solution of the minimization problem (19) as n → ∞.
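A compact, matrix-free MATLAB sketch of iteration (21) is given below; the periodic boundary conditions, the four-channel storage of the Hessian, and the function names are simplifying assumptions of this illustration rather than the implementation used for the experiments.

    function z = hightv_dual_denoise(g, lambda, tau, maxit)
        % Dual fixed-point iteration (21) for min_z (1/(2*lambda))||z-g||^2 + ||Hessian z||_1.
        % Periodic boundaries and four Hessian channels (zxx, zxy, zyx, zyy) are assumed.
        p = zeros([size(g) 4]);                          % dual variable
        for it = 1:maxit
            r  = divH(p) - g/lambda;                     % div^2 p - g/lambda
            Hr = hess_op(r);
            nr = sqrt(sum(Hr.^2, 3));                    % pointwise Frobenius norm
            p  = (p + tau*Hr) ./ (1 + tau*repmat(nr, [1 1 4]));
        end
        z = g - lambda*divH(p);                          % primal solution from the dual variable
    end

    function H = hess_op(u)
        % second differences with periodic boundary conditions
        dx  = circshift(u, [-1 0]) - u;                  % forward difference along rows
        dy  = circshift(u, [0 -1]) - u;                  % forward difference along columns
        uxx = dx - circshift(dx, [1 0]);
        uyy = dy - circshift(dy, [0 1]);
        uxy = circshift(dx, [0 -1]) - dx;                % mixed difference (uxy = uyx here)
        H = cat(3, uxx, uxy, uxy, uyy);
    end

    function d = divH(p)
        % adjoint of hess_op under the same periodic boundary conditions
        axx = @(v) circshift(v, [-1 0]) + circshift(v, [1 0]) - 2*v;
        ayy = @(v) circshift(v, [0 -1]) + circshift(v, [0 1]) - 2*v;
        axy = @(v) v - circshift(v, [1 0]) - circshift(v, [0 1]) + circshift(v, [1 1]);
        d = axx(p(:,:,1)) + axy(p(:,:,2)) + axy(p(:,:,3)) + ayy(p(:,:,4));
    end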

Starting from an initial guess z_0, the alternating minimization method computes a sequence of iterates w_1, z_1, w_2, z_2, ..., w_k, z_k, ... such that w_{k+1} minimizes the objective function in (14) with z = z_k fixed and z_{k+1} minimizes it with w = w_{k+1} fixed, for k = 0, 1, 2, ....

We are now in a position to describe the alternating minimization algorithm for the multiplicative noise removal problem.

Algorithm 1. The alternating minimization algorithm for multiplicative noise removal:
Choose an initial guess z_0 and set k := 0.
(1) For z_k fixed, employ the Newton iterative method to compute w_{k+1} by solving (15).
(2) For w_{k+1} fixed, use the fixed point iteration (21) to compute the dual variable p and obtain z_{k+1} from (16).
(3) Check the stopping criteria. If a stopping criterion is satisfied, then exit with the approximate restored image u = exp(z_{k+1}); otherwise, set k := k + 1 and go to step (1).
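For concreteness, a minimal MATLAB sketch of this outer loop is given below; it calls the two illustrative helpers sketched earlier (solve_w_subproblem and hightv_dual_denoise), and the tolerance, inner iteration counts, and dual step size are assumed values, not the settings used in the experiments.

    % Illustrative outer loop of Algorithm 1; helper functions, tolerance, and
    % iteration counts are assumptions of this sketch.
    function u = multiplicative_denoise(f, alpha1, alpha2, maxit, tol)
        z = log(max(f, eps));                                     % noisy image in the log domain
        for k = 1:maxit
            w    = solve_w_subproblem(f, z, alpha1, 20);          % step (1): pixelwise Newton
            znew = hightv_dual_denoise(w, alpha2/alpha1, 1/64, 50);  % step (2): dual iteration
            done = norm(znew - z, 'fro') / norm(z, 'fro') < tol;  % step (3): stopping rule
            z = znew;
            if done, break; end
        end
        u = exp(z);                                               % restored image
    end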

4. Convergence Analysis

The main aim of this section is to analyze the convergence of the proposed alternating minimization algorithm. Throughout, let F(w, z) denote the objective function in (14). We first remark that F is strictly convex, being the sum of a convex function and two strictly convex functions; see [33] for more details. Hence, to obtain a unique minimizer of F it suffices to show that a minimizer exists, which follows from the coercivity established in the proof below. We denote by F_min the minimum value of F. In the following we show that our alternating minimization algorithm asymptotically yields the solution of the discrete problem associated with (14). We have the following theorem.

Theorem 2. For any initial guess z_0, the sequence (w_k, z_k) generated by Algorithm 1 converges to the minimizer of the objective function F as k → ∞.

Proof. It follows from the alternating iterative process in Algorithm 1 that

F(w_{k+1}, z_{k+1}) ≤ F(w_{k+1}, z_k) ≤ F(w_k, z_k).

Hence the sequence {F(w_k, z_k)} is nonincreasing, and it is bounded from below by the minimum value F_min. Since the two nonsmooth terms of F are convex and the coupling term is convex and differentiable, with the associated Bregman distance, we know from [34, 35] that the alternating scheme has the five-point property. Hence {F(w_k, z_k)} converges to F_min. We note that if the objective function F is coercive, then the sequence {(w_k, z_k)} is bounded, since the sequence of objective values converges.
Now we need to show that the objective function F is coercive. Let D_xx, D_xy, D_yx, and D_yy be the second-order difference matrices in the xx, xy, yx, and yy directions, respectively; with these matrices it is not difficult to obtain a lower bound for the discrete high-order total variation. Moreover, for f > 0 each summand t ↦ t + f e^{−t} of the data term is strictly convex and tends to infinity as |t| → ∞, so the data term tends to infinity as ||w|| → ∞, while for bounded ||w|| the coupling term (α1/2) ||w − z||_2^2 tends to infinity as ||z|| → ∞. Hence F(w, z) → ∞ whenever ||(w, z)|| → ∞, and it follows from the definition of coercivity that F is coercive.
Since the sequence {(w_k, z_k)} is bounded, we can extract a convergent subsequence {(w_{k_j}, z_{k_j})} such that (w_{k_j}, z_{k_j}) → (w*, z*). By continuity of F, we then obtain F(w_{k_j}, z_{k_j}) → F(w*, z*). Moreover, by the definition of the alternating steps we have, for all w and all k,

F(w_{k+1}, z_k) ≤ F(w, z_k),    (35)

and for all z and all k,

F(w_{k+1}, z_{k+1}) ≤ F(w_{k+1}, z).    (36)

Let (w*, z*) denote such a cluster point of {(w_k, z_k)}. Since w_{k+1} and z_{k+1} are the minimizers of the strictly convex subproblem objectives, the differences between consecutive iterates tend to zero, so the subsequences {(w_{k_j+1}, z_{k_j+1})} and {(w_{k_j}, z_{k_j})} share the same limit (w*, z*). By passing to the limit in (35) and (36), we obtain F(w*, z*) ≤ F(w, z*) for all w and F(w*, z*) ≤ F(w*, z) for all z. Because the nonsmooth terms of F are separable in w and z and the coupling term is differentiable, these two partial optimality conditions can be expressed through the subdifferential of F at (w*, z*) and imply that 0 ∈ ∂F(w*, z*); that is, (w*, z*) is the minimizer of F. Hence the whole sequence {(w_k, z_k)} converges towards (w*, z*), the unique minimizer of F. We deduce that the sequence generated by Algorithm 1 converges to the minimizer of F as k tends to infinity.

5. Numerical Experiments

In this section we present numerical results for the multiplicative noise removal problem to demonstrate the performance of the proposed regularization method. All computations were carried out in Matlab 7.10. The results were obtained by running the Matlab codes on an Intel Core i3 CPU (2.27 GHz) computer with 2 GB of RAM. The initial guess is chosen to be the noisy image in all tests.

The quality of the restoration results obtained with the different methods is compared quantitatively by using the Peak Signal-to-Noise Ratio (PSNR), the relative error (ReErr), and the Structural SIMilarity index (SSIM). They are defined as follows:

PSNR = 10 log_10 ( 255^2 m n / ||û − u||_2^2 ),    ReErr = ||û − u||_2 / ||u||_2,

where u and û are the ideal image and the restored image of size m × n, respectively, and

SSIM = ( (2 μ_u μ_û + C1)(2 σ_{uû} + C2) ) / ( (μ_u^2 + μ_û^2 + C1)(σ_u^2 + σ_û^2 + C2) ),

where μ_u and μ_û are the averages of u and û, σ_u^2 and σ_û^2 are the variances of u and û, and σ_{uû} is the covariance of u and û. The positive constants C1 and C2 can be thought of as stabilizing constants for near-zero denominator values. In general, a high PSNR value or a small ReErr value indicates that the restoration is more accurate. The SSIM is a well-known quality metric used to measure the similarity between two images. This method, developed by Wang et al. [36], is based on three specific statistical measures that are much closer to how the human eye perceives differences between images. The whiter the SSIM map, the closer the two images are. Therefore, we use the SSIM map to reveal areas of high or low similarity between two images in this work.
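The PSNR and ReErr values reported below can be computed in a few MATLAB lines; the snippet assumes 8-bit images with peak value 255 and follows the definitions given above.

    % PSNR and relative error of a restored image uhat against the ideal image u
    % (8-bit peak value 255 assumed).
    err2    = sum((uhat(:) - u(:)).^2);
    psnrval = 10 * log10(numel(u) * 255^2 / err2);   % Peak Signal-to-Noise Ratio in dB
    reerr   = norm(uhat(:) - u(:)) / norm(u(:));     % relative error
    fprintf('PSNR = %.2f dB, ReErr = %.4f\n', psnrval, reerr);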

We note that a straightforward high-order total variation regularization scheme for multiplicative noise removal is to add the high-order total variation penalty, weighted by a positive regularization parameter β, directly to the data-fidelity term and to minimize the resulting functional (45); here β measures the trade-off between a good fit and a regularized solution. The Euler-Lagrange equation of the minimization problem (45) is a nonlinear fourth-order PDE; omitting the boundary conditions, the time-marching (TM) algorithm for the high-order total variation model performs explicit gradient-descent steps on this PDE, where a small parameter ε is introduced in the denominators to avoid division by zero.

In the first test, we compare our method with the straightforward high-order total variation regularization scheme. In the time-marching algorithm for solving this scheme, we set the step size to 0.1 in order to obtain a stable iterative procedure; the algorithm is stopped when the maximum number of iterations is reached. In addition, we carry out many experiments with different values of the regularization parameter β in the model (45), and the ones with the best results are presented in this work. For the proposed high-order total variation method (HighTV method), we terminate the iterations and accept the current iterate as the computed approximation of the ideal image as soon as the maximum number of allowed outer iterations has been carried out or the relative difference between consecutive iterates falls below a prescribed tolerance. In the proposed HighTV regularization method there are two regularization parameters, and the quality of the restored image depends strongly on their choice. Similar to [19], we fix one of the two parameters in order to reduce the computational time spent searching for good regularization parameters, and we determine the best value of the other parameter as the one for which the relative error of the restored image with respect to the ideal image is smallest.

In the first experiment, we consider the “Cameraman” image. The original image in Figure 1(a) is distorted by Gamma noise at two different noise levels (two values of the number of looks L); the noisy images are shown in Figures 1(b) and 1(c). It is seen from Figures 1(b) and 1(c) that the smaller L is, the noisier the observed image is. For the noisier image, the relative error (ReErr) between the noisy image and the original image is 0.3150; for the less noisy image, the ReErr is 0.2234.

In Figures 2(a)-2(d), we show the restoration results of the two methods for the two noise levels. According to the figures, the proposed algorithm provides better visual quality of the restored images than the time-marching algorithm. In Figures 3(a) and 3(b), we compare the speed of convergence of the two algorithms for the two noise levels; the x-axis is the number of iterations and the y-axis is the relative error between the restored image and the original image. We see that the proposed algorithm converges faster and attains a smaller relative error. In Table 1, we compare the computational time and the relative error when the best restored images are obtained. From Table 1, we observe that the computational time required by our method is less than that of the time-marching algorithm.

In the following, we compare the proposed HighTV method with the method proposed in [18] (AA method), the method proposed in [19] (HNW method), and the method proposed in [20] (MIDAL method). For the AA method, we use the time-marching algorithm to solve the minimization model (3) as proposed in [18]; similarly, we set the step size to 0.1 in order to obtain a stable iterative procedure, and the algorithm is stopped when the maximum number of iterations is reached. In addition, we carry out many experiments with different values of the regularization parameter in the model (3), and the ones with the best results are presented in this work. For the HNW and MIDAL methods, we use the same stopping rule as for the HighTV method.

For the MIDAL method, the authors have verified experimentally that setting its two parameters equal yields good results. Therefore, we only select this common value, by searching for the one leading to the smallest relative error of the restored image with respect to the ideal image. In the HNW method, there are two regularization parameters; in [19], the authors fix one of them in all tests in order to reduce the computational time spent searching for good regularization parameters. We use the best value of the remaining parameter, chosen such that the relative error of the restored image with respect to the ideal image is the smallest. In this way we have a fair comparison, since we compare the best restorations of the four different methods.

The aim of the second test is to give evidence of the effectiveness of the proposed algorithm over the other three algorithms for multiplicative noise removal problems; in particular, we show that the proposed method can reduce the staircase effect. In this test, we consider the well-known “Lena” image. The original image in Figure 4(a) is distorted by Gamma noise at two noise levels; the noisy images are shown in Figures 4(b) and 4(c). For the noisier image, the PSNR and the relative error (ReErr) between the noisy image and the original image are 11.95 dB and 0.5768, respectively; for the less noisy image, they are 18.37 dB and 0.2765.

The restored images obtained by the AA method, the HNW method, the MIDAL method, and the proposed method are shown in Figures 5(a)-5(h). Compared with the AA, HNW, and MIDAL methods, the proposed approach yields better restoration results, since it avoids the staircase effect of the general total variation methods while at the same time preserving edges as well as those methods do. In Figure 6, we have enlarged some details of the eight restored images. It is clear that the models involving the general total variation transform smooth regions into piecewise constant regions. As seen in the zoomed parts, the proposed method outperforms the other three methods, since the proposed model handles smooth regions better than the other three models. For a quantitative comparison of performance and computational efficiency, Table 2 lists the restoration results in terms of PSNR and ReErr. We observe from Table 2 that both the PSNR and ReErr values of the images restored by the proposed method are better than those obtained by the AA, HNW, and MIDAL methods.

In the third test, the “barche” image is considered. The original image in Figure 7(a) is distorted by Gamma noise at two noise levels; see Figures 7(b) and 7(c). For the noisier image, the PSNR and the ReErr between the noisy image and the original image are 13.20 dB and 0.4090, respectively; for the less noisy image, they are 17.44 dB and 0.2514.

The restored images obtained by the AA method, the HNW method, the MIDAL method, and the proposed method are displayed in Figures 8(a)-8(h). In terms of the visual quality of the restored images, the proposed regularization method is quite competitive with the other three algorithms. In addition, in Figures 9(a)-9(h) we show the SSIM maps of the restored images. It is not difficult to observe that the SSIM maps of the proposed method are whiter than those of the AA, HNW, and MIDAL methods, which means that the proposed method behaves slightly better. The restoration results of the four methods in terms of PSNR and ReErr are also presented in Table 2, which shows that both the PSNR and ReErr values of the images restored by the proposed method are better than those obtained by the AA, HNW, and MIDAL methods.

6. Conclusions

In this paper, we have investigated a fast and efficient way to restore images corrupted by multiplicative noise with a high-order total variation minimization technique. An alternating minimization algorithm is employed to solve the high-order total variation model. The proposed algorithm preserves the edges in the restored image quite well while effectively reducing the staircase effect. Our experimental results show that the performance of the proposed method is superior to that of some existing total variation restoration methods.

Acknowledgments

This research is supported by NSFC (10871034, 61170311), Sichuan Province Sci. & Tech. Research Project (2011JY0002, 12ZC1802), Jiangsu Province Research Project (12KJD110001) and Research Grant KQ12013 from HuaiHai Institute of Technology.