Abstract

This paper focuses on the problem of multiplicative noise removal. Using a gray level indicator, we derive a new functional which consists of an adaptive total variation term and a globally convex fidelity term. We prove the existence and uniqueness of, and a comparison principle for, the minimizer of the variational problem. The existence, uniqueness, and long-time behavior of the associated evolution equation are established. Finally, experimental results illustrate the effectiveness of the model in multiplicative noise reduction. Unlike other methods, the parameters in the proposed algorithms are found dynamically.

1. Introduction

Multiplicative noise occurs when one deals with active imaging systems, such as laser imaging, microscopy, and SAR imaging. Given a noisy image $f : \Omega \to \mathbb{R}$, where $\Omega$ is a bounded open subset of $\mathbb{R}^2$, we assume
\[ f = u\,\eta, \tag{1} \]
where $u$ is the true image and $\eta$ is the noise. In what follows we will always assume that $u > 0$ and $\eta > 0$. Some prior information about the mean and the variance of the multiplicative noise is as follows:
\[ \frac{1}{|\Omega|}\int_\Omega \frac{f}{u}\, dx = 1, \tag{2} \]
\[ \frac{1}{|\Omega|}\int_\Omega \Big(\frac{f}{u} - 1\Big)^2 dx = \sigma^2, \tag{3} \]
where $\sigma^2$ is the variance of the noise. Our purpose is to remove the noise in images while preserving the maximum amount of information about $u$.
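For readers who wish to experiment, the degradation model (1) with the statistics (2)-(3) is easy to simulate. The following MATLAB sketch is our own illustration (the synthetic image, the look number L, and all sizes are illustrative assumptions, not values from the paper); it draws Gamma noise of mean one as an average of L unit exponentials and checks the sample statistics:

    L = 4;                                    % number of looks (illustrative)
    [X, ~] = meshgrid(linspace(0, 1, 128));
    u = 60 + 120 * (X > 0.5);                 % synthetic two-level image, u > 0
    % Gamma(L, 1/L) noise as the mean of L unit exponentials: mean 1, variance 1/L
    eta = reshape(sum(-log(rand(L, numel(u))), 1) / L, size(u));
    f = u .* eta;                             % multiplicative degradation f = u*eta
    fprintf('mean(f./u)    = %.4f (close to 1, cf. (2))\n', mean(f(:) ./ u(:)));
    fprintf('var(f./u - 1) = %.4f (close to 1/L = %.4f, cf. (3))\n', ...
            var(f(:) ./ u(:) - 1), 1 / L);

Since the noise variance is $1/L$, larger look numbers give weaker noise, which is consistent with the experimental discussion in Section 5.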

The goal of this paper is to propose a globally strictly convex functional well adapted to multiplicative noise removal, namely
\[ \min_{u} E(u) = \int_\Omega \alpha\, |Du| + \lambda \int_\Omega (u - f \log u)\, dx, \tag{4} \]
where $f$ is the noisy image and $\int_\Omega \alpha\, |Du|$ stands for the adaptive total variation of $u$. In the new model, we introduce a control factor, $\alpha = \alpha(x)$, which controls the speed of diffusion in different regions: at low gray levels (where $\alpha$ is small), the speed is slow; at high gray levels (where $\alpha$ is close to 1), the speed is fast. The second term in the functional, that is, the fidelity term, is globally convex, which implies the constraint (2).

Various adaptive filters for multiplicative noise removal have been proposed. Early variational methods for multiplicative noise reduction dealt with Gaussian multiplicative noise [1]. In practice, speckle noise [2] is more widespread, for example in synthetic aperture radar (SAR) imagery [3, 4]. Aubert and Aujol [5] then propose a new variational method for Gamma multiplicative noise reduction. Via the logarithm transform, a large variety of methods rely on converting the multiplicative noise into additive noise.

1.1. Gaussian Noise

In the additive noise case, the most classical assumption is that the noise is white and Gaussian. One case of dealing with multiplicative noise is therefore that of white Gaussian noise. Using the framework in [6], Rudin et al. [1] consider the following optimization problem for Gaussian multiplicative noise reduction:
\[ \min_u \int_\Omega |Du| + H(u, f), \tag{5} \]
where $\int_\Omega |Du|$ stands for the total variation of $u$, and $H(u, f)$ is a fidelity term which consists of two integrals with two Lagrange multipliers:
\[ H(u, f) = \lambda_1 \int_\Omega \frac{f}{u}\, dx + \lambda_2 \int_\Omega \Big(\frac{f}{u} - 1\Big)^2 dx. \tag{6} \]

In order to make sure that the two constraints (2) and (3) are always satisfied during the evolution, the authors use the gradient projection method and evolve the following equation:
\[ \frac{\partial u}{\partial t} = \operatorname{div}\Big(\frac{\nabla u}{|\nabla u|}\Big) + \lambda_1 \frac{f}{u^2} + 2\lambda_2 \Big(\frac{f}{u} - 1\Big)\frac{f}{u^2}. \tag{7} \]

If the values of $\lambda_1$ and $\lambda_2$ are found dynamically by the gradient projection method, the resulting problem is not always convex; while if $\lambda_1$ and $\lambda_2$ are fixed, the corresponding minimization problem leads to a sequence of constant functions approaching the mean of $f$.

1.2. Gamma Noise

Generally, speckle noise is treated as Gamma noise with mean equal to one. The probability density function of the noise takes the following form:
\[ g(\eta) = \frac{L^L}{\Gamma(L)}\, \eta^{L-1} e^{-L\eta}\, \mathbf{1}_{\{\eta \ge 0\}}, \tag{8} \]
where $L$ is the number of looks and $\Gamma$ is the Gamma function, so that the mean of $\eta$ equals 1 and its variance equals $1/L$.

Gamma noise is more complex than Gaussian noise [2]. Based on a maximum a posteriori (MAP) estimate of $u$, Aubert and Aujol [5] assume that the noise follows a Gamma probability distribution with mean equal to one and that $u$ follows a Gibbs prior, and then derive a functional leading to the following minimization problem (called the AA model):
\[ \min_u \int_\Omega |Du| + \lambda \int_\Omega \Big(\log u + \frac{f}{u}\Big) dx, \tag{10} \]
where $\lambda > 0$. The fidelity term here is strictly convex only for $u \in (0, 2f)$. The authors prove the existence of a minimizer of the minimization problem and derive existence and uniqueness results for the solution of the associated evolution equation:
\[ \frac{\partial u}{\partial t} = \operatorname{div}\Big(\frac{\nabla u}{|\nabla u|}\Big) + \lambda\, \frac{f - u}{u^2}. \tag{11} \]
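For completeness, the convexity range just quoted can be checked pointwise; the following short verification is our addition, not part of [5]. For the integrand $h(u) = \log u + f/u$ with $f > 0$ fixed,
\[ h''(u) = -\frac{1}{u^2} + \frac{2f}{u^3} = \frac{2f - u}{u^3} > 0 \quad\Longleftrightarrow\quad 0 < u < 2f, \]
so the AA fidelity term is strictly convex exactly on the interval $(0, 2f)$, which is why a globally convex fidelity term is desirable.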

1.3. The Model Based on the Logarithm Transform

The simplest idea is to take the logarithm of both sides of (1), which essentially converts the multiplicative problem into an additive one. If the distribution of the noise takes the form (8), the expectation and the variance of the log-noise are
\[ \mathbb{E}(\log \eta) = \psi(L) - \log L, \qquad \operatorname{Var}(\log \eta) = \psi'(L), \]
where $\psi$ is the polygamma (digamma) function [7]. In [8], Shi and Osher use relaxed inverse scale space (RISS) flows to deal with various noises and provide iterative TV regularization. First, using the log-data $z = \log f$, they propose the following model:
\[ \min_w \int_\Omega |\nabla w| + \frac{\lambda}{2} \int_\Omega (w - z)^2\, dx, \]
where $w = \log u$. The corresponding RISS flow reads
\[ \frac{\partial w}{\partial t} = \operatorname{div}\Big(\frac{\nabla w}{|\nabla w|}\Big) + \lambda\,(z - w + v), \qquad \frac{\partial v}{\partial t} = \beta\,(z - w), \]
with $w(x, 0) = 0$, $v(x, 0) = 0$, for $x \in \Omega$. Then, they generalize several multiplicative noise models [5, 9, 10] and, for convergence reasons, use iterative TV regularization on $w$ to obtain the RISS flow. They also investigate the properties of the flow and study its dependence on the flow parameters.
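The log-noise statistics displayed above are easy to validate numerically. The following MATLAB sketch is our own check (the look number and sample size are illustrative); it uses the built-in polygamma function psi, where psi(0, L) is the digamma value and psi(1, L) the trigamma value:

    L = 4;  n = 1e6;                         % illustrative look number, sample size
    eta = sum(-log(rand(L, n)), 1) / L;      % Gamma(L, 1/L) samples with mean one
    fprintf('E[log eta]:   sample %.4f, theory %.4f\n', ...
            mean(log(eta)), psi(0, L) - log(L));
    fprintf('Var[log eta]: sample %.4f, theory %.4f\n', ...
            var(log(eta)), psi(1, L));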

1.4. The Model Based on the Exponent Transform

Recently, Huang et al. [11] utilize an exponential transformation in the fidelity term of the AA model to propose the following denoising model:
\[ \min_{w, z} \int_\Omega |\nabla w| + \alpha_1 \int_\Omega (z + f e^{-z})\, dx + \alpha_2 \int_\Omega (w - z)^2\, dx, \tag{16} \]
where $\alpha_1$ and $\alpha_2$ are regularization parameters and $z$ is an auxiliary variable; the restored image is $u = e^w$. They further develop an alternating minimization algorithm for the model (16) by incorporating another modified TV regularization from [12]. However, the mathematical analysis of the variational problem (16) is not given in [11]. By the exponential transformation $u = e^w$, Jin and Yang [13] change the fidelity term of the AA model (10) into $\int_\Omega (w + f e^{-w})\, dx$ and then study the following denoising model:
\[ \min_w \int_\Omega |\nabla w| + \lambda \int_\Omega (w + f e^{-w})\, dx. \tag{17} \]

Notice that the fidelity term in (17) is globally strictly convex in $w$. Based on this, they prove the uniqueness of solutions of the variational problem (17) and show the existence and uniqueness of the weak solution of the evolution equation corresponding to (17):
\[ \frac{\partial w}{\partial t} = \operatorname{div}\Big(\frac{\nabla w}{|\nabla w|}\Big) + \lambda\,(f e^{-w} - 1). \tag{18} \]

The paper is organized as follows. In Section 2, inspired by [1, 5, 14] and based on the gray level indicator $\alpha$, we derive the convex adaptive total variation model (4) for multiplicative noise removal. In Section 3, we prove the existence and uniqueness of the solution of the minimization problem (4). In Section 4, we study the evolution problem associated with the minimization problem (4). Specifically, we define the weak solution of the evolution problem, derive estimates for the solution of an approximating problem, prove existence and uniqueness of the weak solution, and discuss the behavior of the weak solution as $t \to \infty$. In Section 5, we provide two numerical algorithms and experimental results to illustrate the effectiveness of our algorithms in image denoising. We also compare the new algorithms with existing ones.

2. A New Variational Model for Multiplicative Noise Removal

The goal of this section is to propose a new variational model for multiplicative noise removal. First, we propose a new fidelity term with global convexity, which ensures that the constraint (2) is always satisfied during the evolution. Then, by analyzing the properties of the noise, we propose the adaptive total variation based on a gray level indicator $\alpha$.

2.1. A Global Convex Fidelity Term

Based on the idea in [15], we can obtain the following fidelity term:
\[ H(u, f) = \lambda \int_\Omega (u - f \log u)\, dx. \tag{19} \]

Note that $\eta = f/u$. Let us consider the Euler-Lagrange equation of the functional (4):
\[ -\operatorname{div}\Big(\alpha\, \frac{\nabla u}{|\nabla u|}\Big) + \lambda \Big(1 - \frac{f}{u}\Big) = 0 \quad \text{in } \Omega, \tag{20} \]
\[ \frac{\partial u}{\partial \vec{n}} = 0 \quad \text{on } \partial\Omega, \tag{21} \]
where $\nabla u$ stands for the gradient of $u$ with respect to the space variable $x$, $\operatorname{div}$ is the divergence operator, and $\vec{n}$ is the outward unit normal. By integrating (20) in space, using integration by parts and the boundary condition (21) in the sense of distributions, we have
\[ \frac{1}{|\Omega|} \int_\Omega \frac{f}{u}\, dx = 1, \tag{22} \]
which implies the constraint (2). Moreover, using the idea in [6], the parameter $\lambda$ can be calculated dynamically as
\[ \lambda = -\frac{1}{\sigma^2 |\Omega|} \int_\Omega \operatorname{div}\Big(\alpha\, \frac{\nabla u}{|\nabla u|}\Big) \Big(\frac{f}{u} - 1\Big)\, dx. \tag{23} \]

Remark 1. (1) It is easy to check that the function $s \mapsto s - f \log s$ reaches its minimum value over $(0, +\infty)$ at $s = f$.
(2) Let us denote
\[ h(s) = s - f \log s, \qquad s > 0. \]
We have $h'(s) = 1 - f/s$, and $h''(s) = f/s^2 > 0$, which shows that the new fidelity term is globally strictly convex.

2.2. The Adaptive Total Variation Model

Assume that $u$ is a piecewise constant function, that is, $u = \sum_i c_i \mathbf{1}_{\Omega_i}$, where $\Omega_i \cap \Omega_j = \emptyset$ for $i \ne j$, $\bigcup_i \Omega_i = \Omega$, and $c_i$ is the gray level. Moreover, we assume that the samples of the noise at the pixels are mutually independent and identically distributed (i.i.d.) with the probability density function (8). For $x \in \Omega_i$, $f(x) = c_i\, \eta(x)$, and therefore $\operatorname{Var} f(x) = c_i^2 \operatorname{Var} \eta(x)$, where $\operatorname{Var} f(x)$ and $\operatorname{Var} \eta(x)$ are the variances of the noisy image and of the noise at the pixel $x$, respectively. Notice that the variance of the noise is constant at each pixel, but the variance of the noisy image is influenced by the gray level: the higher the gray level is, the more remarkable the influence of the noise is. In particular, when $c_i = 0$, $\operatorname{Var} f(x) = 0$, and $f$ is noise free in this case. This fact is illustrated in Figure 1: in spite of the noise being independent and identically distributed (see Figure 1(b)), the noisy image shows different features at pixels where the gray levels are different (see Figure 1(d)).
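This gray-level dependence is straightforward to reproduce. In the following MATLAB sketch (the two gray levels and the look number are illustrative assumptions), the same i.i.d. Gamma noise is applied to two constant regions, and the sample variance of the noisy data scales like $c_i^2 / L$:

    L = 4;
    c_low = 30;  c_high = 180;                  % two illustrative gray levels
    eta = sum(-log(rand(L, 1e5)), 1) / L;       % i.i.d. Gamma(L, 1/L) noise, mean one
    fprintf('low  level: var(f) = %8.1f, c^2/L = %8.1f\n', var(c_low  * eta), c_low^2  / L);
    fprintf('high level: var(f) = %8.1f, c^2/L = %8.1f\n', var(c_high * eta), c_high^2 / L);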

In [14], Chan and Strong propose the following adaptive total variation model:
\[ \min_u \int_\Omega \alpha(x)\, |\nabla u|\, dx, \]
where the weight function $\alpha(x)$ controls the speed of diffusion in different regions. Utilizing this idea, we propose a gray level indicator $\alpha$. The indicator should have the following properties: $\alpha(s)$ is monotonically increasing, $\alpha(0) = 0$, $0 \le \alpha \le 1$, and $\alpha(s) \to 1$ as $s \to +\infty$. Therefore, we propose the following gray level indicators $\alpha$:
\[ \alpha(x) = 1 - \frac{1}{1 + k\, |G_\sigma * f(x)|^n} \tag{26} \]
or
\[ \alpha(x) = \Big(\frac{|G_\sigma * f(x)|}{M}\Big)^{n}, \tag{27} \]
where $G_\sigma$ is a Gaussian kernel of scale $\sigma$, $M = \sup_x |G_\sigma * f(x)|$, and $k$, $n$, $\sigma$, $M$ are parameters. With this choice, $\alpha$ is a positive-valued continuous function; $\alpha$ takes much smaller values at low gray levels than at high gray levels, so that small features at low gray levels are much less smoothed and are therefore preserved. As previously stated, at high gray levels the noisy image is degraded more, while regions at low gray levels are degraded less (see Figure 1(d)). Then, from (26) or (27), at high gray levels $\alpha$ is close to 1 and the new model smooths more in such regions; at low gray levels $\alpha$ is close to 0 and the new model smooths less.
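As an illustration, an indicator of the form (26) can be computed with a normalized Gaussian kernel and the two-dimensional convolution conv2. The following MATLAB sketch is one possible implementation under the reconstruction above, with illustrative parameter values:

    function alpha = gray_indicator(f, k, n, sigma)
        % Gray level indicator of the form (26): alpha = 1 - 1/(1 + k|G_sigma*f|^n).
        r = ceil(3 * sigma);                       % kernel radius
        [x, y] = meshgrid(-r:r);
        G = exp(-(x.^2 + y.^2) / (2 * sigma^2));
        G = G / sum(G(:));                         % normalized Gaussian kernel
        fs = conv2(f, G, 'same');                  % smoothed gray levels G_sigma * f
        alpha = 1 - 1 ./ (1 + k * abs(fs).^n);     % close to 0 at low gray levels
    end

A call such as alpha = gray_indicator(f, 0.005, 2, 1.0) (all values illustrative) produces a field in $[0, 1)$ that is small in dark regions and close to 1 in bright ones.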

The previous analysis leads us to propose the following convex adaptive total variation model for multiplicative noise removal:
\[ \min_u E(u) = \int_\Omega \alpha\, |Du| + \lambda \int_\Omega (u - f \log u)\, dx. \tag{28} \]

The evolution equation associated with the Euler-Lagrange equation of (28) is as follows:
\[ \frac{\partial u}{\partial t} = \operatorname{div}\Big(\alpha\, \frac{\nabla u}{|\nabla u|}\Big) - \lambda \Big(1 - \frac{f}{u}\Big) \quad \text{in } \Omega \times (0, T), \tag{29} \]
\[ \frac{\partial u}{\partial \vec{n}} = 0 \quad \text{on } \partial\Omega \times (0, T), \tag{30} \]
\[ u(x, 0) = f(x) \quad \text{in } \Omega. \tag{31} \]
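For the reader's convenience, here is a formal first-variation computation (our addition) showing how (29) arises from (28). For smooth $u$ with $|\nabla u| \ne 0$ and a smooth perturbation $\varphi$ compatible with the boundary condition (30),
\[
\frac{d}{d\epsilon} E(u + \epsilon \varphi)\Big|_{\epsilon = 0}
= \int_\Omega \alpha\, \frac{\nabla u \cdot \nabla \varphi}{|\nabla u|}\, dx + \lambda \int_\Omega \Big(1 - \frac{f}{u}\Big) \varphi\, dx
= \int_\Omega \Big[ -\operatorname{div}\Big(\alpha\, \frac{\nabla u}{|\nabla u|}\Big) + \lambda\Big(1 - \frac{f}{u}\Big) \Big] \varphi\, dx,
\]
so the steepest descent flow $\partial u/\partial t = -E'(u)$ is exactly (29).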

3. The Minimization Problem (28)

In this section, we study the existence and uniqueness of the solution to the minimization problem (28), and then we consider the comparison principle for the problem (28).

3.1. Preliminaries

If $u \in BV(\Omega)$ with $\Omega \subset \mathbb{R}^2$ bounded and Lipschitz, then $u \in L^2(\Omega)$ and
\[ \|u\|_{L^2(\Omega)} \le C\, \|u\|_{BV(\Omega)}. \]

We always assume that $\Omega$ is a bounded open subset of $\mathbb{R}^2$ with Lipschitz boundary. As in [16], we introduce the following definition of the $\alpha$-total variation.

Definition 2. A function $u \in L^1(\Omega)$ has bounded $\alpha$-total variation in $\Omega$ if
\[ \int_\Omega \alpha\, |Du| := \sup_{\varphi \in \Phi_\alpha} \int_\Omega u \operatorname{div} \varphi\, dx < +\infty, \tag{32} \]
where
\[ \Phi_\alpha = \big\{ \varphi \in C_0^1(\Omega; \mathbb{R}^2) : |\varphi(x)| \le \alpha(x),\ x \in \Omega \big\}. \]

Remark 3 (see [16]). (1) If $u$ has bounded $\alpha$-total variation in $\Omega$, there is a Radon vector measure $Du$ on $\Omega$ such that
\[ \int_\Omega u \operatorname{div} \varphi\, dx = -\int_\Omega \varphi \cdot dDu \quad \text{for all } \varphi \in \Phi_\alpha. \]
(2) From (32), having bounded $\alpha$-total variation in $\Omega$ implies that $u \in BV(\Omega)$.
Now, we quote directly some lemmas on the $\alpha$-total variation from [16].

Lemma 4. Assume that $\{u_k\} \subset L^1(\Omega)$ has uniformly bounded $\alpha$-total variation and $u_k \to u$ in $L^1(\Omega)$. Then, $u$ has bounded $\alpha$-total variation in $\Omega$, and
\[ \int_\Omega \alpha\, |Du| \le \liminf_{k \to \infty} \int_\Omega \alpha\, |Du_k|. \]

Lemma 5. Assume that $u$ has bounded $\alpha$-total variation in $\Omega$. Then, there exists a sequence $\{u_k\} \subset C^\infty(\Omega)$ such that
\[ u_k \to u \ \text{in } L^1(\Omega), \qquad \int_\Omega \alpha\, |\nabla u_k|\, dx \to \int_\Omega \alpha\, |Du|. \]

By a minor modification of the proof of Lemma 1 in Section 4.3 of [17], we obtain the following lemma.

Lemma 6. Let $u$ have bounded $\alpha$-total variation in $\Omega$, and for $a < b$ let $u_{a,b}$ be the cut-off function defined by
\[ u_{a,b} = \min(\max(u, a), b). \]
Then, $u_{a,b}$ has bounded $\alpha$-total variation, and
\[ \int_\Omega \alpha\, |Du_{a,b}| \le \int_\Omega \alpha\, |Du|. \]

3.2. Existence and Uniqueness of the Problem (28)

In this subsection, we show that problem (28) has at least one solution in the space of functions of bounded $\alpha$-total variation.

Theorem 7. Let $f \in L^\infty(\Omega)$ with $\inf_\Omega f > 0$. Then, the problem (28) admits a unique solution $u$ such that
\[ 0 < \inf_\Omega f \le u \le \sup_\Omega f. \]

Proof. Let us rewrite
\[ E(u) = \int_\Omega \alpha\, |Du| + \lambda \int_\Omega (u - f \log u)\, dx. \]
From Remark 1(1), we have $u - f \log u \ge f - f \log f$ for $u > 0$. This implies that $E(u)$ has a lower bound for all $u$ with $u > 0$. Hence, there exists a minimizing sequence $\{u_k\}$ for the problem (28).
Step 1. Let us denote $a = \inf_\Omega f$ and $b = \sup_\Omega f$. We first claim that the minimizing sequence can be chosen so that $a \le u_k \le b$.
In fact, we remark that $s \mapsto s - f(x) \log s$ is decreasing for $s < f(x)$ and increasing for $s > f(x)$. Therefore, if $u_k^b := \min(u_k, b)$, one always has $u_k^b - f \log u_k^b \le u_k - f \log u_k$ a.e. in $\Omega$. Hence,
\[ \int_\Omega (u_k^b - f \log u_k^b)\, dx \le \int_\Omega (u_k - f \log u_k)\, dx. \tag{44} \]
Moreover, utilizing Lemma 6, we have
\[ \int_\Omega \alpha\, |Du_k^b| \le \int_\Omega \alpha\, |Du_k|. \tag{45} \]
Combining (44) and (45), we deduce that $E(\min(u_k, b)) \le E(u_k)$. On the other hand, in the same way, we get that $E(\max(u_k, a)) \le E(u_k)$. Therefore, we can assume that $a \le u_k \le b$ without restriction.
Step 2. Let us prove that there exists $u$ with bounded $\alpha$-total variation such that
\[ E(u) = \inf E. \]
The proof in Step 1 implies in particular that $\{u_k\}$ is bounded in $L^\infty(\Omega)$. Since $a \le u_k \le b$, $\int_\Omega (u_k - f \log u_k)\, dx$ is bounded. Moreover, since $\{E(u_k)\}$ is bounded, there exists a constant $C$ such that
\[ \int_\Omega \alpha\, |Du_k| \le C. \]
Then, by Remark 3(2), the sequence $\{u_k\}$ is bounded in $BV(\Omega)$. Consequently, there exist a function $u$ and a subsequence of $\{u_k\}$, still denoted by itself, such that, as $k \to \infty$, $u_k \to u$ in $L^1(\Omega)$ and a.e. in $\Omega$. Utilizing Lemma 4 and Fatou's lemma, we get that $u$ is a solution of the problem (28).
Finally, from Remark 1(2), the fidelity term is globally strictly convex, and the uniqueness of the minimizer follows from the strict convexity of the energy functional in (28).

3.3. Comparison Principle

In this subsection, we state a comparison principle for problem (28).

Proposition 8. Let $f_1, f_2 \in L^\infty(\Omega)$ with $\inf_\Omega f_1 > 0$, $\inf_\Omega f_2 > 0$, and $f_1 < f_2$. Assume that $u_1$ (resp., $u_2$) is a solution of the problem (28) for $f = f_1$ (resp., $f = f_2$). Then, $u_1 \le u_2$ a.e. in $\Omega$.

Proof. Let us denote by $E_{f_1}$ and $E_{f_2}$ the energy (28) with data $f_1$ and $f_2$, and write $u \wedge v = \min(u, v)$, $u \vee v = \max(u, v)$.
From Theorem 7, there exist solutions $u_1$ and $u_2$ for $f_1$ and $f_2$, respectively. Since $u_1$ is a minimizer with data $f_1$, and similarly for $u_2$,
\[ E_{f_1}(u_1) \le E_{f_1}(u_1 \wedge u_2), \qquad E_{f_2}(u_2) \le E_{f_2}(u_1 \vee u_2). \]
Adding these two inequalities and using a minor modification of the facts in [18, 19], which reads as follows:
\[ \int_\Omega \alpha\, |D(u_1 \wedge u_2)| + \int_\Omega \alpha\, |D(u_1 \vee u_2)| \le \int_\Omega \alpha\, |Du_1| + \int_\Omega \alpha\, |Du_2|, \]
we can deduce that
\[ \lambda \int_{\{u_1 > u_2\}} (f_1 - f_2)(\log u_1 - \log u_2)\, dx \ge 0. \]
Writing the integrand out, we easily deduce that it is strictly negative on the set $\{u_1 > u_2\}$. Since $f_1 < f_2$, the set $\{u_1 > u_2\}$ has zero Lebesgue measure, that is, $u_1 \le u_2$ a.e. in $\Omega$.

4. The Associated Evolution Equations (29)–(31)

In this section, following the approach used in [16, 20], we define the weak solution of the evolution problems (29)–(31), derive estimates for the solution of an approximating problem, prove existence and uniqueness of the weak solution, and discuss the behavior of the weak solution as $t \to \infty$.

4.1. Definition of a Pseudosolution to the Problems (29)–(31)

Denote $Q_T = \Omega \times (0, T)$ and
\[ E(v) = \int_\Omega \alpha\, |Dv| + \lambda \int_\Omega (v - f \log v)\, dx. \]
Suppose that $f \in L^\infty(\Omega)$ with $\inf_\Omega f > 0$, and that $u$ is a classical solution of (29)–(31) with $u > 0$. Multiplying (29) by $v - u$ and integrating over $\Omega$, we have
\[ \int_\Omega \frac{\partial u}{\partial t}(v - u)\, dx + \int_\Omega \alpha\, \frac{\nabla u \cdot \nabla (v - u)}{|\nabla u|}\, dx + \lambda \int_\Omega \Big(1 - \frac{f}{u}\Big)(v - u)\, dx = 0. \]
Since $\alpha\, \nabla u \cdot \nabla v / |\nabla u| \le \alpha\, |\nabla v|$ and, utilizing the Lagrange mean value theorem together with the convexity established in Remark 1, for either $v \ge u$ or $v \le u$ we have
\[ \lambda \int_\Omega \Big(1 - \frac{f}{u}\Big)(v - u)\, dx \le \lambda \int_\Omega \big[(v - f \log v) - (u - f \log u)\big]\, dx. \]
Integrating over $(0, t)$ for any $t \in (0, T)$ then yields
\[ \int_0^t\!\!\int_\Omega \frac{\partial u}{\partial s}(v - u)\, dx\, ds + \int_0^t E(v)\, ds - \int_0^t E(u)\, ds \ge 0. \tag{59} \]

On the other hand, let $v = u + s(\phi - u)$ in (59) with $\phi$ smooth, $\phi > 0$, and $s \in [0, 1]$. Then the left-hand side of the inequality (59) has a minimum at $s = 0$. Hence, if $u$ satisfies (59) and is sufficiently regular, with $u > 0$ and $\partial u/\partial t \in L^2(Q_T)$, then $u$ is also a solution of the problems (29)–(31) in the sense of distributions. Based on this fact, which is similar to the ideas in [13, 16, 21], we give the following definition.

Definition 9. A function $u$ is called a pseudosolution of (29)–(31) if $u \in L^\infty(Q_T) \cap L^\infty(0, T; BV(\Omega))$, $\partial u/\partial t \in L^2(Q_T)$, $u > 0$, and $u$ satisfies (59) for all $t \in (0, T)$ and all $v \in L^\infty(Q_T) \cap L^\infty(0, T; BV(\Omega))$ with $v > 0$ and $\partial v/\partial t \in L^2(Q_T)$. Moreover,
\[ u(x, 0) = f(x) \quad \text{a.e. in } \Omega. \]

4.2. The Approximating Problem for Problems (29)–(31)

In this subsection, we consider the approximating problem
\[ \frac{\partial u_p}{\partial t} = \operatorname{div}\big(\alpha\, |\nabla u_p|^{p-2} \nabla u_p\big) - \lambda \Big(1 - \frac{f}{u_p}\Big) \quad \text{in } Q_T, \tag{61} \]
\[ \frac{\partial u_p}{\partial \vec{n}} = 0 \quad \text{on } \partial\Omega \times (0, T), \tag{62} \]
\[ u_p(x, 0) = f_p(x) \quad \text{in } \Omega, \tag{63} \]
where $1 < p < 2$ and $f_p \in C^\infty(\overline{\Omega})$ with $a \le f_p \le b$, such that, as $p \to 1$,
\[ f_p \to f \ \text{in } L^2(\Omega), \qquad \int_\Omega \alpha\, |\nabla f_p|\, dx \to \int_\Omega \alpha\, |Df|. \tag{64} \]
From Lemma 5, we have the existence of such $f_p$.

Let us denote
\[ E_p(v) = \int_\Omega \frac{\alpha}{p}\, |\nabla v|^p\, dx + \lambda \int_\Omega (v - f \log v)\, dx, \]
which is the energy associated with the problems (61)–(63). Combining the proof of Theorem 3.4 in [16] with the proof of Theorem 7, we also have the following lemma.

Lemma 10. Let $f \in L^\infty(\Omega)$ with $\inf_\Omega f > 0$. Then, there is a unique solution $u_p \in W^{1,p}(\Omega)$ to the problem
\[ \min_v E_p(v), \]
which satisfies
\[ a \le u_p \le b, \]
where $a = \inf_\Omega f$ and $b = \sup_\Omega f$.

Based on this fact, we have the following existence and uniqueness result for the problems (61)–(63).

Theorem 11. Let $f \in L^\infty(\Omega)$ with $\inf_\Omega f > 0$, and let $f_p$ be as in (64). Then, the problems (61)–(63) admit a unique pseudosolution $u_p$ with $\partial u_p/\partial t \in L^2(Q_T)$; that is,
\[ \int_0^t\!\!\int_\Omega \frac{\partial u_p}{\partial s}(v - u_p)\, dx\, ds + \int_0^t E_p(v)\, ds - \int_0^t E_p(u_p)\, ds \ge 0 \]
for any $t \in (0, T)$ and $v \in L^\infty(Q_T) \cap L^p(0, T; W^{1,p}(\Omega))$ with $v > 0$ and $\partial v/\partial t \in L^2(Q_T)$. Moreover,
\[ a \le u_p \le b \quad \text{a.e. in } Q_T, \tag{69} \]
and, for any $t \in (0, T)$,
\[ \int_0^t\!\!\int_\Omega \Big|\frac{\partial u_p}{\partial s}\Big|^2 dx\, ds + E_p(u_p(t)) \le E_p(f_p). \tag{70} \]

Proof. Let us fix $p$ and introduce the truncation $\tau(s) = \min(\max(s, a), b)$. We consider the auxiliary problem as follows:
\[ \frac{\partial u}{\partial t} = \operatorname{div}\big(\alpha\, |\nabla u|^{p-2} \nabla u\big) - \lambda \Big(1 - \frac{f}{\tau(u)}\Big) \quad \text{in } Q_T, \tag{72} \]
\[ \frac{\partial u}{\partial \vec{n}} = 0 \quad \text{on } \partial\Omega \times (0, T), \tag{73} \]
\[ u(x, 0) = f_p(x) \quad \text{in } \Omega. \tag{74} \]
Since the $p$-Laplacian is a maximal monotone operator, using the Galerkin method and the Lebesgue convergence theorem, from standard results for parabolic equations [22] we get that the problems (72)–(74) admit a unique weak solution $u$ with $\partial u/\partial t \in L^2(Q_T)$.
Next, let us verify that the truncation in the problems (72)–(74) can be omitted. Multiply (72) by $(u - b)^+ = \max(u - b, 0)$ and integrate over $\Omega$ to get
\[ \frac{1}{2} \frac{d}{dt} \int_\Omega |(u - b)^+|^2\, dx \le 0. \]
Therefore, $\int_\Omega |(u - b)^+|^2\, dx$ is decreasing in $t$, and since $(f_p - b)^+ = 0$, we have $(u - b)^+ = 0$ for all $t$, and so $u \le b$. Multiplying (72) by $(a - u)^+$, a similar argument yields $u \ge a$ for all $t$; then (69) follows, the truncation is inactive, and $u$ is also the solution to the problems (61)–(63).
Moreover, multiplying (61) by $\partial u/\partial t$ and integrating over $Q_t$ yield
\[ \int_0^t\!\!\int_\Omega \Big|\frac{\partial u}{\partial s}\Big|^2 dx\, ds + E_p(u(t)) \le E_p(f_p) \]
for $t \in (0, T)$. Then (70) follows.
Finally, we verify that the above weak solution is a pseudosolution of the problems (61)–(63). Suppose that $v \in L^\infty(Q_T) \cap L^p(0, T; W^{1,p}(\Omega))$ with $v > 0$ and $\partial v/\partial t \in L^2(Q_T)$. Multiplying (61) by $v - u$ and integrating by parts over $Q_t$, we obtain
\[ \int_0^t\!\!\int_\Omega \frac{\partial u}{\partial s}(v - u)\, dx\, ds + \int_0^t\!\!\int_\Omega \alpha\, |\nabla u|^{p-2} \nabla u \cdot \nabla (v - u)\, dx\, ds + \lambda \int_0^t\!\!\int_\Omega \Big(1 - \frac{f}{u}\Big)(v - u)\, dx\, ds = 0. \tag{84} \]
Utilizing Young's inequality with exponents $p$ and $p/(p - 1)$, for the second term on the left-hand side of (84) we obtain
\[ \int_\Omega \alpha\, |\nabla u|^{p-2} \nabla u \cdot \nabla (v - u)\, dx \le \frac{1}{p} \int_\Omega \alpha\, |\nabla v|^p\, dx - \frac{1}{p} \int_\Omega \alpha\, |\nabla u|^p\, dx. \]
Utilizing the Lagrange mean value theorem yields
\[ \lambda \int_\Omega \Big(1 - \frac{f}{u}\Big)(v - u)\, dx \le \lambda \int_\Omega \big[(v - f \log v) - (u - f \log u)\big]\, dx \]
for either $v \ge u$ or $v \le u$. Returning to (84), we therefore conclude that
\[ \int_0^t\!\!\int_\Omega \frac{\partial u}{\partial s}(v - u)\, dx\, ds + \int_0^t E_p(v)\, ds - \int_0^t E_p(u)\, ds \ge 0 \]
for any $t \in (0, T)$. This fact implies that $u$ is a pseudosolution of the problems (61)–(63). The uniqueness follows from the uniqueness of the weak solution of the problems (72)–(74).

4.3. Existence and Uniqueness of the Problems (29)–(31)

In this subsection, we prove the main theorem on the existence and uniqueness of the solution to the problems (29)–(31).

Theorem 12. Suppose $f \in L^\infty(\Omega)$ with $\inf_\Omega f > 0$. Then, there exists a unique pseudosolution $u$ to the problems (29)–(31). Moreover, for a.e. $t \in (0, T)$,
\[ \int_0^t\!\!\int_\Omega \Big|\frac{\partial u}{\partial s}\Big|^2 dx\, ds + E(u(t)) \le E(f). \tag{90} \]

Proof
Step 1. First we fix the test function and pass to the limit $p \to 1$.
Let $u_p$ be the pseudosolution of (61)–(63). From (69)-(70), we know that, for fixed $T$, $\{u_p\}$ is uniformly bounded in $L^\infty(Q_T)$ and $\{\partial u_p/\partial t\}$ is uniformly bounded in $L^2(Q_T)$; that is,
\[ \|u_p\|_{L^\infty(Q_T)} + \Big\|\frac{\partial u_p}{\partial t}\Big\|_{L^2(Q_T)} \le C, \tag{91} \]
where $C$ is a constant independent of $p$. Now we claim that there exist a sequence of functions $\{u_{p_j}\}$, $p_j \to 1$, and a function $u$ such that, as $p_j \to 1$,
\[ u_{p_j} \stackrel{*}{\rightharpoonup} u \quad \text{weakly-* in } L^\infty(Q_T), \tag{92} \]
\[ \frac{\partial u_{p_j}}{\partial t} \rightharpoonup \frac{\partial u}{\partial t} \quad \text{weakly in } L^2(Q_T), \tag{93} \]
\[ u_{p_j} \to u \quad \text{in } L^1(Q_T), \tag{94} \]
\[ u_{p_j}(t) \to u(t) \quad \text{in } L^2(\Omega), \ \text{for each } t \in [0, T], \tag{95} \]
\[ u(x, 0) = f(x) \quad \text{a.e. in } \Omega. \tag{96} \]
In fact, from (91), there is a sequence $\{p_j\}$ and a function $u$ with $\partial u/\partial t \in L^2(Q_T)$ such that (92) and (93) hold.
Note that, for any $t_1, t_2 \in [0, T]$, by (91),
\[ \|u_{p_j}(t_1) - u_{p_j}(t_2)\|_{L^2(\Omega)} \le \Big\|\frac{\partial u_{p_j}}{\partial t}\Big\|_{L^2(Q_T)}\, |t_1 - t_2|^{1/2} \le C\, |t_1 - t_2|^{1/2}. \tag{98} \]
By (69) and (70) together with Young's inequality,
\[ \int_\Omega \alpha\, |\nabla u_{p_j}(t)|\, dx \le \frac{1}{p_j} \int_\Omega \alpha\, |\nabla u_{p_j}(t)|^{p_j}\, dx + \frac{p_j - 1}{p_j} \int_\Omega \alpha\, dx \le C, \tag{99} \]
so, for each $t$, $\{u_{p_j}(t)\}$ is a bounded sequence in $BV(\Omega)$. Hence, combining this with (98) and a diagonal argument, we get that, for each $t$, as $p_j \to 1$,
\[ u_{p_j}(t) \to u(t) \quad \text{in } L^1(\Omega), \tag{100} \]
which, by the dominated convergence theorem, implies (94).
Since $u_{p_j}(0) = f_{p_j} \to f$ in $L^2(\Omega)$ by (64), (96) follows.
To see (95), note that from (98) the family $\{u_{p_j}\}$ is equicontinuous from $[0, T]$ into $L^2(\Omega)$, and from (93) and (100) we have, for each $t$, convergence of $u_{p_j}(t)$ to $u(t)$; then a standard argument, using the uniform $L^\infty$ bound, yields (95).
From (70) and (92), we also obtain that $u \in L^\infty(0, T; BV(\Omega))$ with $a \le u \le b$. The claim is proved.
Next we show that, for all smooth $v$ with $v > 0$ and $\partial v/\partial t \in L^2(Q_T)$, and for each $t \in (0, T)$,
\[ \int_0^t\!\!\int_\Omega \frac{\partial u}{\partial s}(v - u)\, dx\, ds + \int_0^t E(v)\, ds - \int_0^t E(u)\, ds \ge 0. \tag{102} \]
By Theorem 11, we obtain that
\[ \int_0^t\!\!\int_\Omega \frac{\partial u_{p_j}}{\partial s}(v - u_{p_j})\, dx\, ds + \int_0^t E_{p_j}(v)\, ds - \int_0^t E_{p_j}(u_{p_j})\, ds \ge 0. \tag{103} \]
Notice that, using Lemma 4 and (99), we have
\[ \int_0^t\!\!\int_\Omega \alpha\, |Du(s)|\, ds \le \liminf_{p_j \to 1} \int_0^t\!\!\int_\Omega \frac{\alpha}{p_j}\, |\nabla u_{p_j}|^{p_j}\, dx\, ds. \tag{104} \]
Letting $p_j \to 1$ in (103) and using (92), (94), and (104), together with $E_{p_j}(v) \to E(v)$ for smooth $v$, we obtain (102) for all smooth $v$ with $v > 0$ and $\partial v/\partial t \in L^2(Q_T)$.
Step 2. Now it only remains to pass to the limit in the class of test functions in (102) to complete the existence of the solution to (29)–(31).
Replacing $p$ by $p_j$ in (70), letting $p_j \to 1$, and using (92)–(95), (104), and (64), we obtain
\[ \int_0^t\!\!\int_\Omega \Big|\frac{\partial u}{\partial s}\Big|^2 dx\, ds + E(u(t)) \le E(f) \tag{105} \]
for a.e. $t \in (0, T)$, which is the estimate (90). Combining this with (69), we have
\[ a \le u \le b \quad \text{a.e. in } Q_T. \tag{106} \]
Then, given any $v \in L^\infty(Q_T) \cap L^\infty(0, T; BV(\Omega))$ with $v > 0$ and $\partial v/\partial t \in L^2(Q_T)$, by a process similar to the one used for getting (92)–(96), based on Lemma 5 we can obtain a sequence of smooth functions $\{v_k\}$ such that, as $k \to \infty$,
\[ v_k \to v \ \text{in } L^2(Q_T), \quad \frac{\partial v_k}{\partial t} \rightharpoonup \frac{\partial v}{\partial t} \ \text{in } L^2(Q_T), \quad \int_\Omega \alpha\, |\nabla v_k|\, dx \to \int_\Omega \alpha\, |Dv| \ \text{for a.e. } t, \quad v_k > 0. \tag{107-110} \]
Replacing $v$ by $v_k$ in (102), letting $k \to \infty$, and using Lemma 4, we can conclude from (107)–(110) that (102) holds for all $v \in L^\infty(Q_T) \cap L^\infty(0, T; BV(\Omega))$ with $v > 0$ and $\partial v/\partial t \in L^2(Q_T)$. Then, the existence of the pseudosolution to (29)–(31) is completed, and (105) gives the estimate in the statement of the theorem.
Finally, the uniqueness of pseudosolutions to the problems (29)–(31) follows as in [21, 23]. Let $u_1$, $u_2$ be two pseudosolutions to (29)–(31) with $u_1(x, 0) = u_2(x, 0) = f(x)$. Taking $v = u_2$ in the inequality (59) for $u_1$, and $v = u_1$ in the inequality for $u_2$, we have
\[ \int_0^t\!\!\int_\Omega \frac{\partial u_1}{\partial s}(u_2 - u_1)\, dx\, ds + \int_0^t E(u_2)\, ds - \int_0^t E(u_1)\, ds \ge 0, \]
\[ \int_0^t\!\!\int_\Omega \frac{\partial u_2}{\partial s}(u_1 - u_2)\, dx\, ds + \int_0^t E(u_1)\, ds - \int_0^t E(u_2)\, ds \ge 0. \]
Adding the above two inequalities, we get
\[ -\frac{1}{2} \int_\Omega |u_1 - u_2|^2(t)\, dx = -\int_0^t\!\!\int_\Omega \frac{\partial (u_1 - u_2)}{\partial s}\,(u_1 - u_2)\, dx\, ds \ge 0 \]
for all $t \in (0, T)$. This implies that $u_1 = u_2$ a.e. in $Q_T$.

4.4. Long-Time Behavior

Finally, we show the asymptotic behavior of the pseudosolution as $t \to \infty$.

Theorem 13. As $t \to \infty$, the pseudosolution $u(\cdot, t)$ of the problems (29)–(31) converges strongly in $L^2(\Omega)$ to the minimizer $\tilde{u}$ of the functional $E$, that is, to the solution of the problem (28).

Proof. Take a time-independent function $w \in L^\infty(\Omega) \cap BV(\Omega)$ with $w > 0$ in (59); then
\[ \int_0^t\!\!\int_\Omega \frac{\partial u}{\partial s}(w - u)\, dx\, ds + t\, E(w) - \int_0^t E(u)\, ds \ge 0. \tag{116} \]
As in [16], let
\[ v(t) = \frac{1}{t} \int_0^t u(s)\, ds. \]
Since $a \le u \le b$, for each $t$ we have $a \le v(t) \le b$, with $v(t)$ uniformly bounded in $BV(\Omega)$ and in $L^\infty(\Omega)$. Then, there exist a sequence $t_j \to \infty$ and a function $\tilde{u} \in BV(\Omega)$ such that, as $t_j \to \infty$,
\[ v(t_j) \to \tilde{u} \quad \text{in } L^2(\Omega). \]
Since $\partial u/\partial t$ is uniformly bounded in $L^2(\Omega \times (0, \infty))$ by (105), we have
\[ \frac{1}{t} \int_0^t\!\!\int_\Omega \frac{\partial u}{\partial s}(w - u)\, dx\, ds \to 0 \quad \text{as } t \to \infty. \]
By dividing (116) by $t$ and then taking the limit along $t_j$, using the convexity and the lower semicontinuity of $E$ (so that $E(\tilde{u}) \le \liminf_j \frac{1}{t_j} \int_0^{t_j} E(u(s))\, ds$), we get that, for any $w$ with $w > 0$,
\[ E(\tilde{u}) \le E(w), \]
which implies that $\tilde{u}$ is the minimizer of the problem (28).
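In computations, the convergence asserted by Theorem 13 can be monitored by evaluating a discrete version of the energy $E$ along the flow. The following MATLAB sketch is our own illustration (it assumes the reconstruction of (28) above and unit grid spacing):

    function e = discrete_energy(u, f, alpha, lambda)
        % Discrete version of E(u) = int alpha*|Du| + lambda*int (u - f*log(u)).
        ux = [diff(u, 1, 2), zeros(size(u, 1), 1)];     % forward differences
        uy = [diff(u, 1, 1); zeros(1, size(u, 2))];
        e = sum(sum(alpha .* sqrt(ux.^2 + uy.^2))) ...  % adaptive TV term
          + lambda * sum(sum(u - f .* log(u)));         % convex fidelity term
    end

Along the iterations of the schemes in Section 5, this quantity should decrease and level off as the iterates approach the minimizer.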

5. Numerical Methods and Experimental Results

We present in this section some numerical examples illustrating the capability of our model. We also compare it with the known AA model. In the next two subsections, two numerical discrete schemes, the $\alpha$-total variation ($\alpha$-TV) scheme and the $p$-Laplace approximate ($p$-LA) scheme, are proposed.

5.1. The $\alpha$-Total Variation Scheme

Numerically, we obtain a solution of the problem (28) by evolving the associated equation (29) to a steady state. To discretize (29), the finite difference scheme in [6] is used. Denote the space step by $h$ and the time step by $\Delta t$. Thus, we have
\[ x_i = ih, \quad y_j = jh, \quad i, j = 0, 1, \ldots, N, \qquad t_n = n \Delta t, \quad n = 0, 1, 2, \ldots, \]
\[ u_{i,j}^n = u(x_i, y_j, t_n), \]
where $Nh$ is the size of the image and, in the discrete flux, $|\nabla u|$ is replaced by $\sqrt{|\nabla u|^2 + \varepsilon^2}$, where $\varepsilon$ is a regularization parameter chosen near 0.

The numerical algorithm for the problems (29)–(31) iterates this discretization: at each step, the MATLAB function "conv2" is used to compute the two-dimensional discrete convolution $G_\sigma * f$ needed by the gray level indicator, the dynamic parameter $\lambda$ is updated, and $u^{n+1}$ is obtained from $u^n$. The program stops when the prescribed stopping criterion is met.
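We include here a hedged MATLAB sketch of one possible implementation of the $\alpha$-TV scheme (our own illustration, under the reconstructions of (23), (26), and (29) above; grid spacing $h = 1$, forward/backward differences, and homogeneous Neumann boundary conditions). Note that $\lambda$ is refreshed at every step from the discretization of (23), so it is found dynamically rather than hand-tuned:

    function u = alpha_tv_flow(f, alpha, sigma2, dt, niter, epsr)
        % Explicit time stepping for (29)-(31): u(x,0) = f, du/dn = 0 on the boundary.
        u = f;
        for it = 1:niter
            ux = [diff(u, 1, 2), zeros(size(u, 1), 1)];    % forward x-difference
            uy = [diff(u, 1, 1); zeros(1, size(u, 2))];    % forward y-difference
            g  = sqrt(ux.^2 + uy.^2 + epsr^2);             % regularized |grad u|
            px = alpha .* ux ./ g;                         % flux alpha*grad(u)/|grad u|
            py = alpha .* uy ./ g;
            divp = [px(:, 1), diff(px, 1, 2)] ...          % backward divergence,
                 + [py(1, :); diff(py, 1, 1)];             % adjoint of forward diff
            r = f ./ u - 1;                                % reaction term -(1 - f/u)
            lambda = -sum(divp(:) .* r(:)) / (sigma2 * numel(u));  % dynamic (23)
            u = u + dt * (divp + lambda * r);              % explicit descent step
        end
    end

A typical call is u = alpha_tv_flow(f, gray_indicator(f, 0.005, 2, 1.0), 1/L, 0.2, 300, 1e-3), where all values are illustrative; instead of a fixed iteration count, the loop can be stopped once the residual variance matches the constraint (3).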

5.2. The $p$-Laplace Approximate Scheme

From the proof of Theorem 12, we know that the term $\operatorname{div}(\alpha \nabla u / |\nabla u|)$ can be approximated by the term $\operatorname{div}(\alpha |\nabla u|^{p-2} \nabla u)$. Based on this, we can use the numerical algorithms for the problems (61)–(63) to obtain a solution of the problem (28) as $p \to 1$. As in [24], the numerical discrete scheme for the problems (61)–(63) is built analogously to the $\alpha$-TV scheme, with the regularized TV flux replaced by the $p$-Laplace flux, where $p \in (1, P]$, $P$ is the upper limit of the exponent $p$, $\Delta t$ is the time step, and $k$ is the iteration number.
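A hedged MATLAB sketch of the $p$-LA scheme follows (our illustration; the decreasing exponent schedule $p_k = 1 + (P - 1)/k$ is an assumption made for concreteness, not necessarily the paper's choice):

    function u = p_laplace_flow(f, alpha, sigma2, dt, niter, P)
        % Explicit scheme for (61)-(63) with the exponent p driven toward 1.
        u = f;
        for k = 1:niter
            p  = 1 + (P - 1) / k;                          % p_k -> 1 as k grows
            ux = [diff(u, 1, 2), zeros(size(u, 1), 1)];
            uy = [diff(u, 1, 1); zeros(1, size(u, 2))];
            g  = sqrt(ux.^2 + uy.^2) + 1e-8;               % avoid division by zero
            px = alpha .* g.^(p - 2) .* ux;                % p-Laplace flux
            py = alpha .* g.^(p - 2) .* uy;
            divp = [px(:, 1), diff(px, 1, 2)] + [py(1, :); diff(py, 1, 1)];
            r = f ./ u - 1;
            lambda = -sum(divp(:) .* r(:)) / (sigma2 * numel(u));  % dynamic (23)
            u = u + dt * (divp + lambda * r);
        end
    end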

5.3. Comparison with Other Methods

In this section, we use a methodology for the numerical experiments similar to that of [25]. For comparison purposes, some very recent multiplicative noise removal algorithms from the literature are considered, namely the SO algorithm [8] (see Section 1.3) and the AA algorithm [5] (see (10)). As recommended in [8], the stopping rule for the SO algorithm is to reach a time such that the residual between the current log-iterate and the underlying log-image matches the variance of the relevant log-noise, the variance being given in Section 1.3.

The denoising algorithms were tested on three images: a synthetic image, an aerial image, and a cameraman image. In all numerical experiments with our algorithms, the images do not need to be normalized to the range $[0, 1]$; this is different from the other algorithms. For each image, a noisy observation is generated by multiplying the original image by speckle noise according to the model (8) with a chosen number of looks $L$.

For a noise-free image $u$ and its denoised version $\hat{u}$ produced by any algorithm, the denoising performance is measured in terms of the peak signal-to-noise ratio (PSNR) [25],
\[ \mathrm{PSNR} = 10 \log_{10} \frac{R^2\, |\Omega|}{\|u - \hat{u}\|_{L^2}^2}\ \mathrm{dB}, \]
and the mean absolute-deviation error (MAE),
\[ \mathrm{MAE} = \frac{\|u - \hat{u}\|_{L^1}}{|\Omega|}, \]
where $R$ gives the gray-scale range of the original image and $|\Omega|$ is the size of the image.
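Both measures are immediate to compute; the following MATLAB sketch (our illustration) matches the definitions above:

    function [psnr_val, mae_val] = quality(u_clean, u_denoised)
        d = u_clean(:) - u_denoised(:);
        N = numel(d);                                    % |Omega|, number of pixels
        R = max(u_clean(:)) - min(u_clean(:));           % gray-scale range
        psnr_val = 10 * log10(R^2 * N / sum(d.^2));      % PSNR in dB
        mae_val  = sum(abs(d)) / N;                      % mean absolute-deviation error
    end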

For a fair comparison, the parameters of SO and AA were tuned manually to reach their best performance level. Their values are summarized in Table 1. In the $\alpha$-total variation ($\alpha$-TV) scheme, there are four parameters: the influencing factor $n$, the scale of convolution $\sigma$, the time step $\Delta t$, and the turning factor $k$. However, the parameter $\lambda$ is found dynamically by (23). To ensure stability as well as optimal results, we choose a small time step. In the $p$-Laplace approximate ($p$-LA) scheme, besides the same four parameters as in the $\alpha$-TV scheme, there is a new parameter $P$ with $p \in (1, P]$. We do not need the exact value of $P$; an approximate estimate suffices. Notice that the parameters of our method are very stable with respect to the image.

The results are depicted in Figures 2, 3, and 4 for the synthetic image, Figures 5, 6, and 7 for the aerial image, and Figures 8, 9, and 10 for the cameraman image. Our methods do a good job of restoring faint geometrical structures of the images even for low values of $L$; see, for instance, the results on the aerial image. Our algorithm performs among the best and even outperforms its competitors most of the time, both visually and quantitatively, as revealed by the PSNR and MAE values. For the SO method, the number of iterations necessary to satisfy the stopping rule increases rapidly when $L$ decreases [25]. For the AA method, an appropriate parameter $\lambda$ is necessary.

In the numerical experiments, we can see that, for the nontexture image, our methods and the AA method work well (see Figure 4); for the texture image, our methods and the SO method work well (see Figure 7). The denoising performance results are tabulated in Table 2, where the best PSNR and MAE values are shown in boldface. The PSNR improvement brought by our approach can be quite high, particularly for small $L$ (see, e.g., Figures 2–4), and the visual quality is quite respectable. This is an important achievement since, in practice, $L$ has a small value, usually 1 and rarely above 4. The improvement becomes less salient as $L$ increases, which is intuitively expected. But even for larger $L$, the PSNR of our algorithm can be higher than that of the AA and SO methods (see Table 2).

Acknowledgments

The authors would like to express their sincere thanks to the referees for their valuable suggestions during the revision of the paper, which contributed greatly to this work. The authors would also like to thank Jalal Fadili for providing them with the MATLAB code of his algorithm. This work was partially supported by the Fundamental Research Funds for the Central Universities (Grants nos. HIT.NSRIF.2011003 and HIT.NSRIF.2012065), the National Science Foundation of China (Grant no. 11271100), the Aerospace Supported Fund, China (Contract no. 2011-HT-HGD-06), the China Postdoctoral Science Foundation (Grant no. 2012M510933), and the 985 Project of Harbin Institute of Technology.