Abstract and Applied Analysis
Volume 2013 (2013), Article ID 912373, 21 pages
A Convex Adaptive Total Variation Model Based on the Gray Level Indicator for Multiplicative Noise Removal
Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China
Received 28 March 2013; Accepted 14 May 2013
Academic Editor: Guoyin Li
Copyright © 2013 Gang Dong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper focuses on the problem of multiplicative noise removal. Using a gray level indicator, we derive a new functional which consists of an adaptive total variation term and a globally convex fidelity term. We prove the existence, uniqueness, and comparison principle of the minimizer for the variational problem. The existence, uniqueness, and long-time behavior of the associated evolution equation are established. Finally, experimental results illustrate the effectiveness of the model in multiplicative noise reduction. Unlike other methods, the parameters in the proposed algorithms are found dynamically.
Multiplicative noise occurs when one deals with active imaging systems, such as laser images, microscope images, and SAR images. Given a noisy image , where is a bounded open subset of , we assume where is the true image and is the noise. In what follows, we will always assume that and . Some prior information about the mean and variance of the multiplicative noise is as follows: where . Our purpose is to remove the noise in images while preserving the maximum information about .
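As a quick numerical illustration of this degradation model, the following is a sketch (not from the paper): the flat test image, the number of looks L, and the Gamma-distributed speckle with mean one are illustrative assumptions.

```python
import numpy as np

# Degradation model f = u * eta with Gamma speckle of mean 1 and
# variance 1/L. The flat image u and the number of looks L are
# illustrative choices.
rng = np.random.default_rng(0)

L = 10
u = np.full((256, 256), 100.0)                         # "true" image
eta = rng.gamma(shape=L, scale=1.0 / L, size=u.shape)  # mean 1, var 1/L
f = u * eta                                            # observed image

ratio = f / u                                          # recovered noise
print(round(float(ratio.mean()), 3), round(float(ratio.var()), 3))
```

The recovered ratio f/u has sample mean close to one and sample variance close to 1/L, matching the prior information on the noise.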
The goal of this paper is to propose a globally strictly convex functional well-adapted to removing multiplicative noise, which is as follows: where is the noisy image and stands for the adaptive total variation of . In the new model, we introduce a control factor, , which controls the speed of diffusion in different regions: at low gray levels (), the speed is slow; at high gray levels (), the speed is fast. The second term in the functional, that is, the fidelity term, is globally convex, which implies the constraint (2).
Various adaptive filters for multiplicative noise removal have been proposed. In the beginning, variational methods for multiplicative noise reduction dealt with the Gaussian multiplicative noise . In real life, the speckle noise  is more widespread, for example, in synthetic aperture radar (SAR) imagery [3, 4]. Then, Aubert and Aujol  propose a new variational method for Gamma multiplicative noise reduction. By the logarithm transform, a large variety of methods rely on the conversion of the multiplicative noise into additive noise.
1.1. Gaussian Noise
In the additive noise case, the most classical assumption is that the noise is white Gaussian noise. So one case of dealing with multiplicative noise is that of white Gaussian noise. Using the framework in , Rudin et al.  consider the following optimization problem for Gaussian multiplicative noise reduction: where stands for the total variation of , and the fidelity term consists of two integrals with two Lagrange multipliers:
If the values of and are found dynamically by the gradient projection method, the method is not always convex, while if they are fixed, the corresponding minimization problem will lead to a sequence of constant functions approaching .
1.2. Gamma Noise
Generally, the speckle noise is treated as Gamma noise with mean equal to one. The probability density function of the noise takes the following form:
Gamma noise is more complex than Gaussian noise . Based on maximum a posteriori (MAP) estimation on , Aubert and Aujol  assume that the noise follows a Gamma probability distribution with mean equal to one and that follows a Gibbs prior, and then derive a functional leading to the following minimization problem (called the AA model): where . The new fidelity term is strictly convex for . The authors prove the existence of a minimizer of the minimization problem and derive existence and uniqueness results for the solution of the associated evolution equation:
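The convexity window referred to here can be checked by direct differentiation. Assuming the standard AA fidelity integrand h(u) = log u + f/u for a fixed data value f > 0 (a textbook computation, not quoted verbatim from the paper):

```latex
h(u) = \log u + \frac{f}{u}, \qquad
h'(u) = \frac{1}{u} - \frac{f}{u^{2}}, \qquad
h''(u) = -\frac{1}{u^{2}} + \frac{2f}{u^{3}} = \frac{2f - u}{u^{3}} .
```

Thus h'(u) vanishes at u = f, and h''(u) > 0 exactly for 0 < u < 2f, which is the sense in which the AA fidelity term is only locally, not globally, strictly convex.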
1.3. The Model Based on the Logarithm Transform
The simplest idea is to take the logarithm of both sides of (1), which essentially converts the multiplicative problem into an additive one. If the distribution of the noise takes the form (8), the expectation and the variance of the log-noise involve the polygamma function . In , Shi and Osher use relaxed inverse scale space (RISS) flows to deal with various noises and provide iterative TV regularization. First, using the log-data , they propose the following model: where . The corresponding RISS flow reads with , , for . Then, they generalize several multiplicative noise models [5, 9, 10] and, for convergence reasons, use iterative TV regularization on to obtain the RISS flow. They also investigate the properties of the flow and study its dependence on the flow parameters.
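The additive reformulation is easy to check numerically. A minimal sketch, assuming Gamma noise with mean one and parameter L as above; the digamma value is approximated by differencing math.lgamma rather than taken from a special-function library:

```python
import math
import numpy as np

# log f = log u + log eta: the noise becomes additive after the log
# transform. For eta ~ Gamma(L, 1/L), the log-noise mean is
# psi(L) - log(L), with psi the digamma function (approximated here by
# a central difference of math.lgamma). L and sample size are
# illustrative choices.
rng = np.random.default_rng(1)

L = 4
eta = rng.gamma(shape=L, scale=1.0 / L, size=1_000_000)
log_noise = np.log(eta)

h = 1e-5
digamma_L = (math.lgamma(L + h) - math.lgamma(L - h)) / (2 * h)
expected_mean = digamma_L - math.log(L)

print(round(float(log_noise.mean()), 3), round(expected_mean, 3))
```

The empirical mean of the log-noise agrees with the digamma-based prediction, which is the kind of log-noise statistic the text refers to.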
1.4. The Model Based on the Exponent Transform
Recently, Huang et al.  utilize an exponential transformation in the fidelity term of the AA model to propose the following denoising model: where and are regularization parameters, and is an auxiliary variable. Next, they further develop an alternating minimization algorithm for the model (16) by incorporating another modified TV regularization in . However, a mathematical analysis of the variational problem (16) is not given in . By the exponential transformation , Jin and Yang  change the fidelity term of the AA model (2) and then study the following denoising model:
Notice that the fidelity term is globally strictly convex. Based on this, they prove the uniqueness of the solutions to the variational problem (17) and show the existence and uniqueness of the weak solution for the following evolution equation corresponding to (17):
The paper is organized as follows. In Section 2, inspired by [1, 5, 14] and based on the gray level indicator , we derive a convex adaptive total variation model (4) for multiplicative noise removal. In Section 3, we prove the existence and uniqueness of the solution of the minimization problem (4). In Section 4, we study the evolution problem associated with the minimization problem (4). Specifically, we define the weak solution of the evolution problem, derive estimates for the solution of an approximating problem, prove existence and uniqueness of the weak solution, and discuss the behavior of the weak solution as time tends to infinity. In Section 5, we provide two numerical algorithms and experimental results to illustrate the effectiveness of our algorithms in image denoising. We also compare the new algorithms with existing ones.
2. A New Variational Model for Multiplicative Noise Removal
The goal of this section is to propose a new variational model for multiplicative noise removal. First, we propose a new fidelity term with global convexity, which always satisfies the constraint (2) during the evolution. Then, by analyzing the properties of the noise, we propose an adaptive total variation based on a gray level indicator .
2.1. A Global Convex Fidelity Term
Based on the idea in , we obtain the following fidelity term:
Let us consider the following Euler-Lagrange equation for the corresponding functional problem: where and stand, respectively, for the gradient and the Hessian matrix of with respect to the space variable , and the divergence operator comes from the functional. By integrating (20) in space, using integration by parts and the boundary condition (21) in the sense of distributions, we obtain an identity which implies the constraint (2). Moreover, using the idea in , the parameter can be calculated as
Remark 1. (1) It is easy to check that the function reaches its minimum value over for .
(2) Let us denote by
We have , and , from which we deduce that the new fidelity term is globally strictly convex.
2.2. The Adaptive Total Variation Model
Assume is a piecewise constant function, that is, , where , for , , and is the gray level. Moreover, we assume that the samples of the noise at each pixel are mutually independent and identically distributed (i.i.d.) with probability density function . For , , and therefore , where and are the variances of the noisy image and of the noise at the pixel , respectively. Note that the variance of the noise is constant at each pixel, but the variance of the noisy image is influenced by the gray level: the higher the gray level is, the more remarkable the influence of the noise is. In particular, when , , and therefore is noise free in this case. This fact is illustrated in Figure 1. It can be seen that, despite the independent and identical distribution of the noise (see Figure 1(b)), the noisy image shows different features at pixels whose gray levels are different (see Figure 1(d)).
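The gray-level dependence of the noisy image's variance can be sketched as follows (illustrative values; two flat regions share the same i.i.d. noise statistics, yet the brighter one shows a proportionally larger standard deviation):

```python
import numpy as np

# Two flat regions with gray levels 20 and 200 receive the same i.i.d.
# Gamma speckle (mean 1, variance 1/L). Since f = u * eta, the noisy
# image's standard deviation at a pixel scales like u times the noise
# standard deviation.
rng = np.random.default_rng(2)

L = 10
eta = rng.gamma(shape=L, scale=1.0 / L, size=(2, 500_000))
low, high = 20.0, 200.0                  # illustrative gray levels
f_low, f_high = low * eta[0], high * eta[1]

print(round(f_low.std(), 2), round(f_high.std(), 2),
      round(f_high.std() / f_low.std(), 1))
```

The ratio of the two standard deviations matches the ratio of the gray levels, which is exactly the effect visible in Figure 1(d).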
In , Chan and Strong propose the following adaptive total variation model: where the weight function controls the speed of diffusion in different regions. Utilizing this idea, we propose a gray level indicator . The indicator has the following properties: is monotonically increasing, , , and , as . Therefore, we propose the following gray level indicators : or where , , , and are parameters. With this choice, is a positive-valued continuous function; takes much smaller values at low gray levels () than at high gray levels (), so that some small features at low gray levels are much less smoothed and therefore preserved. As previously stated, regions of the noisy image at high gray levels are degraded more, while regions at low gray levels are degraded less (see Figure 1(d)). Then, from (26)/(27), at high gray levels, , the new model smooths the region more; at low gray levels, , it smooths less.
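Since formulas (26)/(27) are not reproduced here, the following is a hypothetical indicator with the stated qualitative properties; the rational form and the parameters k and gamma are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

# Hypothetical gray level indicator: monotonically increasing, near 0 at
# low gray levels, approaching 1 at high gray levels. The rational form
# and the parameters k, gamma are illustrative, not the paper's (26)/(27).
def gray_indicator(u, k=50.0, gamma=2.0):
    u = np.asarray(u, dtype=float)
    return u**gamma / (u**gamma + k**gamma)

levels = np.array([0.0, 10.0, 50.0, 200.0])
print(np.round(gray_indicator(levels), 3))
```

Any indicator of this shape slows diffusion where the image is dark (small values) and lets it run where the image is bright (values near one).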
The previous analysis leads us to propose a convex adaptive total variation model for multiplicative noise removal,
The evolution of the Euler-Lagrange equation for (28) is as follows:
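The smoothing part of such an evolution can be sketched with an explicit time-stepping scheme. This is a simplified illustration only: the weight w stands in for the gray level indicator, the fidelity term is omitted, and the step size and the regularization eps are illustrative choices.

```python
import numpy as np

# Explicit sketch of a weighted TV-type diffusion
# u_t = div( w * grad(u) / |grad(u)| ); the fidelity term is omitted.
def weighted_tv_step(u, w, dt=0.01, eps=0.1):
    # forward differences (last row/column clamped)
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(ux**2 + uy**2 + eps**2)   # regularized gradient magnitude
    px, py = w * ux / mag, w * uy / mag
    # backward differences of the flux give the divergence
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return u + dt * div

rng = np.random.default_rng(3)
u0 = rng.random((64, 64))
w = np.ones_like(u0)          # constant weight reduces to plain TV flow
u = u0.copy()
for _ in range(100):
    u = weighted_tv_step(u, w)
print(round(float(u0.std()), 3), round(float(u.std()), 3))
```

With a constant weight this is ordinary regularized TV flow; replacing w by a gray level indicator evaluated on the image reproduces the adaptive behavior described above.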
3. The Minimization Problem (28)
If with , then
3.1. Preliminaries
We always assume that is a bounded open subset of with Lipschitz boundary. As in , we introduce the following definition of -total variation.
Definition 2. A function has bounded -total variation in , if where
Remark 3 (see ). (1) If has bounded -total variation in , there is a Radon vector measure on such that
(2) From (32), having bounded -total variation in implies that .
Now, we directly show some lemmas on -total variation from .
Lemma 4. Assume that and in . Then, , and
Lemma 5. Assume that . Then, there exists a sequence such that
By a minor modification of the proof of Lemma 1 in Section 4.3 of , we obtain the following lemma.
Lemma 6. Let , and let be the cut-off function defined by Then, , and
3.2. Existence and Uniqueness of the Problem (28)
In this subsection, we show that problem (28) has at least one solution in .
Theorem 7. Let with . Then, the problem (28) admits a unique solution such that
Proof. Let us rewrite
From Remark 1(1), we have
for with . This implies that has a lower bound for all with . Hence, there exists a minimizing sequence for the problem (28).
Step 1. Let us denote and . We first claim that .
In fact, we remark that is decreasing for and increasing for . Therefore, if , one always has Hence, if , we have Moreover, utilizing Lemma 6, we obtain Combining (44) and (45), we deduce that On the other hand, in the same way, we get that . Therefore, we may assume that without loss of generality.
Step 2. Let us prove that there exists such that The proof in Step 1 implies in particular that is bounded in . Since and , is bounded. Moreover, by the definition of , there exists a constant such that Then, which implies that the sequence is bounded in . Consequently, there exist a function and a subsequence, still denoted in the same way, such that, as , Utilizing Lemma 4 and Fatou’s lemma, we get that is a solution of the problem (28).
Finally, from Remark 1(2), is strictly convex as , and then the uniqueness of the minimizer follows from the strict convexity of the energy functional in (28).
3.3. Comparison Principle
In this subsection, we state a comparison principle for problem (28).
Proposition 8. Let , with , , and . Assume that (resp., ) is a solution of the problem (28) for (resp., ). Then, a.e. in .
Proof. Let us denote , and .
From Theorem 7, there exist solutions and for and , respectively. Since is a minimizer with data , for , Adding these two inequalities and using a minor modification of the facts in [18, 19], we can deduce that Writing , we easily deduce that Since , the set has zero Lebesgue measure, that is, a.e. in .
4. The Associated Evolution Problem
In this section, by an approach from the theory used in [16, 20], we define the weak solution of the evolution problems (29)–(31), derive estimates for the solution of an approximating problem, prove existence and uniqueness of the weak solution, and discuss the behavior of the weak solution as time tends to infinity.
Denote , . Suppose that with , and is a classical solution of (29)–(31) with . Multiplying (29) by , integrating over , we have Since and utilizing the Lagrange mean value theorem, for either or , we have Integrating over for any then yields
On the other hand, let in (59) with , , and . Then the left-hand side of the inequality (59) has a minimum at . Hence, if satisfies (59) and with , , and , then it is also a solution of the problems (29)–(31) in the sense of distributions. Based on this fact, which is similar to the ideas in [13, 16, 21], we give the following definition.
In this subsection, we consider the approximating problem where , such that as , From Lemma 5, the existence of such approximations follows.
Lemma 10. Let with . Then, there is a unique solution to the problem satisfying where .
Proof. Let us fix and introduce the following functions:
We consider the auxiliary problem as follows:
Since -Laplacian operator is a maximal monotone operator, using Galerkin method and Lebesgue convergence theorem, from standard results for parabolic equations , we get that the problems (72)–(74) admit a unique weak solution such that
Next, let us verify that the truncated function in the problems (72)–(74) can be omitted. Multiply (72) by , where and integrate over to get Then, Therefore, is decreasing in and since we have that for all , and so Then, is also the solution to the problems (61)–(63). Let us fix . Multiplying (61) by , where a similar argument yields that for all , and then (69) follows.
Moreover, multiplying (61) by and integrating it over yield for . Then (70) follows.
Finally, we verify that the above weak solution is a pseudosolution of the problems (61)–(63). Suppose that with and . Multiplying (61) by and integrating by parts over , we obtain the inequality (84). Utilizing Young’s inequality for the second term on the left-hand side of (84), we obtain the next estimate, and utilizing the Lagrange mean value theorem yields the bound for either or . Returning to (84), we therefore conclude that the desired inequality holds for any . This fact implies that is a pseudosolution of the problems (61)–(63). The uniqueness follows from the uniqueness of the weak solution of the problems (61)–(63).
Step 1. First we fix and pass to the limit .
Let be the pseudosolution of (61)–(63). From (69)-(70), we know that, for fixed , is uniformly bounded in and is uniformly bounded in , that is, where is a constant. Now we claim that there exist a sequence of functions and a function such that, as ,
In fact, from (91), there is a sequence and a function with such that (92) and (93) hold.
Note that, for any , as , which shows that, for each , By (69) and (70), for each , is a bounded sequence in . Hence, combining this with (98), we get that, for each as , which implies (94).
Since (96) follows.
To see (95), noting that from (100), is equicontinuous, and from (93) and (99), we have, for each , Then, a standard argument yields (95).
From (70) and (92), we also obtain that with . The claim is proved.
Next we show that, for all with and , and for each , By Theorem 11, we obtain that Notice that, using Lemma 4, we have Letting , in (103) and using (92), (94), and (104), we obtain (102) for all with and .
Step 2. Now it only remains to pass to the limit as in (102) to complete the existence of the solution to (29)–(31).
Replacing by in (70), letting (), and using (92)–(95), (104), and (64), we obtain Combining this with (69), we have
Then, by a processing similar to the one for getting (92)–(96), we can obtain a sequence of functions and a function such that, as , ,
Replacing by in (102), letting , , and using Lemma 4, we obtain from (107)–(110) that the inequality holds for all with and . This completes the proof of the existence of the pseudosolution of (29)–(31).
Moreover, replacing by , letting in (105), and using (107)–(110) and (64), we have
Finally, the uniqueness of pseudosolutions to the problems (29)–(31) follows as in [21, 23]. Let , be two pseudosolutions to (29)–(31) with . Then, we have Adding the above two inequalities, we get for all . This implies that a.e. in .
4.4. Long-Time Behavior
Finally, we show the asymptotic limit of the solution as time tends to infinity.
Proof. Take a function with in (59), then
As in , let
Since with , for each , we have that with uniformly bounded in and . Then, there exists a subsequence of and a function , such that as ,
Since is uniformly bounded in , we have
By dividing in (116) and then taking the limit along , we get that, for any with , which implies that is the minimizer of the problem (28).
5. Numerical Methods and Experimental Results
We present in this section some numerical examples illustrating the capability of our model. We also compare it with the well-known AA model. In the next two subsections, two numerical discrete schemes, the -total variation (-TV) scheme and the -Laplace approximation (-LA) scheme, will be proposed.
5.1. -Total Variation Scheme
Numerically, we obtain a solution of the problem (28) by evolving the associated equation (29) to a steady state. To discretize (29), the finite difference scheme in  is used. Denote the space step by and the time step by . Thus, we have where and is the regularization parameter chosen near 0.
Here the MATLAB function "conv2" is used to compute the two-dimensional discrete convolution of the matrix , that is, . Through the above lines, we can obtain from . The program stops when the stopping criterion is satisfied.
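For readers outside MATLAB, conv2(A, K, 'same') with an odd-sized kernel can be reproduced with a few lines of NumPy; the kernel and input below are illustrative:

```python
import numpy as np

# NumPy stand-in for MATLAB's conv2(A, K, 'same') with an odd-sized
# kernel: zero-pad, flip the kernel, slide it over the image.
def conv2_same(A, K):
    kh, kw = K.shape
    ph, pw = kh // 2, kw // 2
    Ap = np.pad(A, ((ph, ph), (pw, pw)))       # zero padding, as conv2 does
    Kf = K[::-1, ::-1]                         # true convolution flips K
    out = np.zeros_like(A, dtype=float)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            out[i, j] = np.sum(Ap[i:i + kh, j:j + kw] * Kf)
    return out

A = np.arange(9.0).reshape(3, 3)
K = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])               # 4-neighbor sum kernel
print(conv2_same(A, K))
```

With the symmetric 4-neighbor kernel, each output entry is the sum of the pixel's in-bounds neighbors, matching MATLAB's 'same'-shaped output.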
5.2. -Laplace Approximate Scheme
From the proof of Theorem 12, we know that the term can be approximated by the term . Based on this, we can use the numerical algorithms for the problems (61)–(63) to obtain a solution of the problem (28), as . As in , the numerical discrete scheme for the problems (61)–(63) is given as follows: where , is the upper limit of the exponent , , and is the iteration number.
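The approximation idea behind the p-LA scheme, namely that a p-power of the gradient magnitude approaches the total variation integrand as p tends to 1, can be sketched on a 1-D signal (the test function and step size are illustrative):

```python
import numpy as np

# 1-D sketch: the integral of |u'|^p over [0,1] approaches the total
# variation integral of |u'| as p -> 1. Test function u(x) = x^2.
x = np.linspace(0.0, 1.0, 1001)
h = x[1] - x[0]
u = x**2
du = np.diff(u) / h                      # forward-difference derivative

tv = np.sum(np.abs(du)) * h              # approximates the TV integral = 1
for p in (1.5, 1.1, 1.01):
    tv_p = np.sum(np.abs(du) ** p) * h   # approximates 2^p / (p + 1)
    print(p, round(tv_p, 3))
print(round(tv, 3))
```

As p shrinks toward 1, the p-power quantity decreases toward the total variation value, which is the sense in which the p-LA scheme approximates the -TV scheme.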
5.3. Comparison with Other Methods
In this section, we use a setup for the numerical experiments similar to that in . For comparison purposes, some very recent multiplicative noise removal algorithms from the literature are considered, such as the SO algorithm  (see (16)-(17)) and the AA algorithm  (see (10)). As recommended in , the stopping rule for the SO algorithm is to iterate until reaching such that
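A generic relative-change stopping rule of this kind can be sketched as follows; the toy iteration and the tolerance are illustrative assumptions, not the SO algorithm itself:

```python
import numpy as np

# Iterate a map until the relative change ||u_new - u_old|| / ||u_old||
# drops below a tolerance.
def run_until_converged(u0, step, tol=1e-4, max_iter=10_000):
    u = u0
    for k in range(max_iter):
        u_new = step(u)
        rel = np.linalg.norm(u_new - u) / max(np.linalg.norm(u), 1e-12)
        if rel < tol:
            return u_new, k + 1
        u = u_new
    return u, max_iter

step = lambda v: 0.5 * (v + v.mean())    # toy contraction toward the mean
u0 = np.array([0.0, 10.0, 20.0])
u_final, iters = run_until_converged(u0, step)
print(iters, np.round(u_final, 3))
```

Any of the compared denoising iterations can be dropped in place of the toy step; only the stopping criterion is the point of the sketch.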