Mathematical Problems in Engineering
Volume 2016, Article ID 5130346, 12 pages
http://dx.doi.org/10.1155/2016/5130346
Research Article

A Cartoon-Texture Decomposition Based Multiplicative Noise Removal Method

1School of Mathematics and Statistics, Xidian University, Xi’an 710026, China
2School of Mathematical Science, Henan Institute of Science and Technology, Xinxiang 453003, China

Received 23 April 2016; Revised 29 June 2016; Accepted 26 July 2016

Academic Editor: Maria L. Gandarias

Copyright © 2016 Chenping Zhao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We propose a new framework for multiplicative noise removal. To improve the denoising performance, we add a regularization term on the texture component to the denoising model, obtaining a multiscale multiplicative noise removal model. The proposed model is jointly convex and can be solved efficiently by optimization algorithms. We introduce the Douglas-Rachford splitting method to solve the proposed model. In the algorithm, we make full use of several important proximity operators, which either have closed-form expressions or can be executed in a single iteration. In particular, the proximity operator of the texture regularization norm is derived, and it reduces to a filtering operation in the Fourier domain. In the simulation experiments, we first analyze and select the required parameters and then test the designed algorithm on several images with the chosen parameters. Finally, we compare the denoising performance of the proposed model with existing models, using the signal to noise ratio (SNR) and the peak signal to noise ratio (PSNR) to evaluate the noise suppression. Experimental results demonstrate that the designed algorithm solves the model effectively and that the recovered images of the proposed model have higher SNRs/PSNRs and better visual quality.

1. Introduction

Image denoising is a basic and important task in image processing. The most mature denoising model is the additive noise model, in which the noise is assumed to obey a Gaussian distribution; that is,

$$f = u + n, \quad (1)$$

where $f$ is the observed image, $u$ is the clean image, and $n$ is the noise. However, the noise involved in many applications does not conform to the additive model; it may corrupt an image in other forms. In this paper, we are concerned with the denoising problem under the assumption that the original image has been corrupted by multiplicative noise. Examples include the intensity inhomogeneity in magnetic resonance imaging (MRI) and the speckle noise in ultrasonic and synthetic aperture radar (SAR) images. The degradation model of multiplicative noise can be expressed as

$$f = u \circ \eta, \quad (2)$$

where $\circ$ refers to the componentwise multiplication and $\eta$ is the noise. To model actual problems, the multiplicative noise is assumed to obey some random distribution; for example, the noise in SAR images is assumed to obey the Gamma distribution, and the noise in ultrasonic images obeys the Rayleigh distribution.
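The degradation model above can be sketched numerically. This is a minimal illustration, assuming Gamma noise (the case treated in Section 4); the image size, noise level M, and random seed are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade_multiplicative(u, M=10):
    """Corrupt a clean image u with multiplicative Gamma noise.
    eta ~ Gamma(shape=M, scale=1/M) has mean 1 and variance 1/M, and the
    observation is the componentwise product f = u * eta."""
    eta = rng.gamma(shape=M, scale=1.0 / M, size=u.shape)
    return u * eta

u = np.full((64, 64), 100.0)   # flat synthetic test image
f = degrade_multiplicative(u)  # noisy observation, E[f] = u since E[eta] = 1
```

Because the noise has unit mean, the observation fluctuates around the clean image; the fluctuation strength is governed by M.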

1.1. Multiplicative Noise Removal Models

There are many multiplicative noise removal models; one of the most classic, based on TV regularization, is the RLO model proposed by Rudin et al. [1]:

$$\min_u \int_\Omega |\nabla u| \quad \text{s.t.} \quad \frac{1}{|\Omega|}\int_\Omega \frac{f}{u} = 1, \qquad \frac{1}{|\Omega|}\int_\Omega \left(\frac{f}{u} - 1\right)^2 = \sigma^2, \quad (3)$$

where the mean of the noise is assumed to be 1 and the variance is assumed to be $\sigma^2$. Since (3) is a nonconvex problem, it is very difficult to solve.

The second classic model is the AA model [2], given by Aubert and Aujol under the hypothesis of Gamma distributed noise:

$$\min_u \int_\Omega \left(\log u + \frac{f}{u}\right) + \lambda \int_\Omega |\nabla u|. \quad (4)$$

The objective function in (4) is also nonconvex. However, Aubert and Aujol showed the existence of minimizers of the objective function and employed a gradient method to solve (4) numerically. Several improved works based on (4) have been developed in recent years.

Recently, Shi and Osher [3] proposed applying a logarithm transformation to the noisy observation and derived a TV minimization model for multiplicative noise removal problems. Huang et al. also proposed a log-domain denoising model by using the transformation $w = \log u$ in (4), namely,

$$\min_w \sum_i \left(w_i + f_i e^{-w_i}\right) + \lambda \|\nabla w\|_1, \quad (5)$$

which is known as the EXP model [4]. The objective function in (5) is strictly convex in $w$, and the authors proposed an alternating minimization algorithm to solve the model and showed the convergence of the method. Nevertheless, model (5) is convex in $w$ rather than in $u$, the original image domain. At the same time, Durand et al. [5] proposed a method composed of several stages. They also used the log-image data and applied reasonable suboptimal hard thresholding to its curvelet transform; they then applied a variational method, minimizing a specialized hybrid criterion composed of an $\ell_1$ data-fidelity term on the thresholded curvelet coefficients and a TV regularization term in the log-image domain. The restored image is obtained by taking an exponential of the minimizer, weighted in such a way that the mean of the original image is preserved. Their restored images combine the advantages of shrinkage and variational methods. Besides the above approaches, dictionary learning methods and nonlocal mean methods have also been proposed and developed for multiplicative denoising [6–9].

In [10], Zhao et al. developed a convex optimization model for multiplicative noise removal. The main idea is to rewrite the multiplicative noise equation so that the image variable and the noise variable are decoupled. That is, problem (2) is rewritten as

$$D_u \eta = f, \quad (6)$$

where $D_u$ is a diagonal matrix whose main diagonal entries are given by $u$. According to (2), when there is no noise in the observed image, we have $\eta = \mathbf{1}$, the vector of all ones. When there is multiplicative noise in the observed image, we expect that the entries of $\eta$ stay close to 1, and moreover that they are greater than zero. So the noise is invertible componentwise, and (6) is equivalent to

$$u = D_f m, \quad (7)$$

where $D_f$ is the diagonal matrix of the vector $f$ and $m$ is the componentwise inverse of $\eta$. The convex model to solve (2) is then proposed as

$$\min_{u,m}\ \frac{\alpha}{2}\left\|m - \bar{m}\mathbf{1}\right\|_2^2 + \left\|D_f m - u\right\|_1 + \mu\|\nabla u\|_1, \quad (8)$$

where $\bar{m}$ denotes the mean of $m$.

In (8), the first term measures the variance of $m$, the second term is the data term, and the third term is the TV regularization term. If the first term were absent and $m$ could be arbitrarily assigned, then minimizing the objective could lead to the trivial solution in which $u$ is a constant image. So the variance term guarantees a proper minimizer of (8) [10]. Furthermore, it is worth noting that the image variable $u$ and the noise variable $m$ are decoupled, and the model is jointly convex in $(u, m)$, which makes it easy to solve by optimization methods.

1.2. The Contribution

We first analyze the performance of (8). Observe that, if we fix $m$ and set $g = D_f m$, the minimization problem (8) reduces to a TV-L1 problem, denoted as

$$\min_u \mu\|\nabla u\|_1 + \|u - g\|_1. \quad (10)$$

In [11], it was pointed out that minimizing (10) extracts the large scale component and leaves out the small scale contents of the image $g$. In other words, we can extract the cartoon component at different scales through the selection of the parameter $\mu$. We verify this conclusion through the following experiment. The test image is shown in Figure 1(a): an image with four squares of decreasing sizes. The experimental results are as follows: for the smallest tested value of $\mu$, all four squares are extracted, as shown in Figure 1(b); as $\mu$ increases, the smaller squares are successively lost, as shown in Figures 1(c) and 1(d); for the largest tested value, only the biggest square remains, while all the others are lost, as shown in Figure 1(e). These results are consistent with the theoretical analysis in [11]: the bigger $\mu$ is, the bigger the scale of extraction is. As a result, we have sound reasons to believe that the minimizer of model (10) is essentially the cartoon component of the whole restored image we expect.

Figure 1: Numerical experiment results of (10) with different $\mu$: (a) is the original image $g$; (b)–(e) are the minimizers for increasing values of $\mu$.

The above analysis shows that, if we want to recover more content from an image, we should choose a small $\mu$, which preserves much more of the small scales and details. However, for a noisy image, the small scale component may contain much of the noise, which may harm the denoising quality. For this reason, we improve the denoising model by adding prior information about the texture component and design a cartoon-texture decomposition based denoising model. We then design the numerical algorithm based on the Douglas-Rachford splitting algorithm and three useful proximity operators. Before conducting the experiments, we analyze the selection of the corresponding parameters. Finally, we verify the performance of the proposed model by comparing its SNR/PSNR indexes with those of existing models.

The rest of the paper is organized as follows. We propose our new model in the next section. In Section 3, the numerical method is presented: the Douglas-Rachford (DR) splitting algorithm and the proximity operators used in this paper are first reviewed, and the algorithm is then designed and described in detail. In Section 4, denoising experiments on two types of multiplicative noise are implemented, and the experimental results and performance analyses are presented. Finally, we conclude our work in Section 5.

2. The Proposed Model

Based on the analysis in Section 1.2, the recovered image obtained by solving (8) mainly contains the cartoon component of what we expect, while most of the significant texture components are lost. To improve the denoising performance, we modify model (8) by adding the texture information: the TV regularization term is applied to the cartoon component $u$ of the restored image, and, taking into consideration the prior information on the texture $v$, we add a regularization term on $v$ whose minimization extracts the texture component very well (its proximity operator is derived in Section 3.1). In a word, the new model is

$$\min_{u,v,m}\ \frac{\alpha}{2}\left\|m - \bar{m}\mathbf{1}\right\|_2^2 + \left\|D_f m - (u + v)\right\|_1 + \mu\|\nabla u\|_1 + \gamma\|v\|_T, \quad (11)$$

where $\|\cdot\|_T$ denotes the texture norm, and the restored image is $u + v$. In (11), $\mu$, $\gamma$, and $\alpha$ are positive regularization parameters that control the balance among the terms of the objective function.

The proposed model possesses the following advantages. First, the added texture term in the regularization part helps to recover more image information and improves the quality of the restored image. Second, $\mu$ and $\gamma$ can be adjusted according to the noise level, which gives the model a multiscale property. Third, when $\gamma$ is taken large enough, the texture component $v$ vanishes and (11) degrades to model (8); that is, (11) is a modified and improved version of (8). Finally, the proposed model keeps the structure of (8), is jointly convex in $(u, v, m)$, and can be solved easily by optimization methods. In the following section, we use an alternating iteration method and the Douglas-Rachford method to deal with it, combined with the application of several proximity operators.

3. The Numerical Method

3.1. Douglas-Rachford Algorithm and Some Proximity Operators

Before proposing the numerical scheme of (11), we will review some basic algorithms and operators. The first one is the Douglas-Rachford splitting algorithm.

Definition 1. Let $g$ and $h$ be proper, l.s.c., and convex functions such that $\operatorname{ri}(\operatorname{dom} g) \cap \operatorname{ri}(\operatorname{dom} h) \neq \varnothing$ and $g(x) + h(x) \to +\infty$ as $\|x\| \to +\infty$; then the problem $\min_x g(x) + h(x)$ admits at least one solution and, for any $\gamma > 0$, its solutions are characterized by the fixed points of the iteration in Algorithm 2.

Algorithm 2 (Douglas-Rachford algorithm). Consider the following. Fix $\gamma > 0$, $\lambda \in (0, 2)$, and an initial point $x^0$.
For $k = 0, 1, 2, \ldots$
  $y^k = \mathrm{prox}_{\gamma h}(x^k)$,
  $x^{k+1} = x^k + \lambda\left(\mathrm{prox}_{\gamma g}(2y^k - x^k) - y^k\right)$,
end for
where the operator $\mathrm{prox}$ will be defined in the following.
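The Douglas-Rachford iteration can be sketched on a toy problem whose answer is known in closed form. This is an illustrative sketch, not the paper's solver; the weight, step $\gamma$, relaxation $\lambda = 1$, and test vector are arbitrary choices:

```python
import numpy as np

def prox_l1(z, t):
    """Proximity operator of t*||.||_1 (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_quad(z, t, b):
    """Proximity operator of (t/2)*||. - b||^2."""
    return (z + t * b) / (1.0 + t)

def douglas_rachford(b, weight=0.5, gamma=1.0, n_iter=200):
    """Minimize weight*||x||_1 + 0.5*||x - b||^2 by Douglas-Rachford splitting."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        y = prox_quad(x, gamma, b)                       # prox of the quadratic term
        x = x + prox_l1(2 * y - x, gamma * weight) - y   # reflected prox of the l1 term
    return prox_quad(x, gamma, b)

b = np.array([1.0, -0.2, 0.8, -2.0])
x_star = douglas_rachford(b)
# the closed-form minimizer of this problem is soft thresholding of b by the weight
```

For this particular objective the minimizer is known exactly, so the iteration can be checked against `prox_l1(b, 0.5)`.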

Secondly, we review the definition of proximity operator and present several special examples that will be used in the numerical scheme.

Definition 3. Let $\varphi$ be a proper, l.s.c., and convex function; for every $z$, the minimization problem

$$\min_x \varphi(x) + \frac{1}{2}\|x - z\|^2$$

admits a unique solution, which is denoted by $\mathrm{prox}_\varphi(z)$, and the operator $\mathrm{prox}_\varphi$ thus defined is called the proximity operator of $\varphi$.

In the following, we list three examples of proximity operator which will be used in this paper:

Proximity of the $\ell_1$ norm. Set $\varphi = t\|\cdot\|_1$ with $t > 0$; then $\mathrm{prox}_\varphi(z)$ can be expressed componentwise in closed form:

$$\left(\mathrm{prox}_\varphi(z)\right)_i = \mathrm{sign}(z_i)\max\left(|z_i| - t,\ 0\right),$$

which is known as the soft thresholding operator, and we denote it in short by $\mathcal{S}_t$.
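The soft thresholding operator is one line of NumPy; the test vector and threshold below are illustrative:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximity operator of t*||.||_1: shrink each entry toward zero by t,
    clipping entries smaller than t in magnitude to exactly zero."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

z = np.array([3.0, -0.5, 1.2, -2.0, 0.0])
s = soft_threshold(z, 1.0)   # entries with |z_i| <= 1 are set to zero
```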

Proximity of the TV term. Computing $\mathrm{prox}_{\mu\mathrm{TV}}(z)$ is equivalent to solving the TV-L2 problem

$$\min_x \mu\|\nabla x\|_1 + \frac{1}{2}\|x - z\|_2^2.$$

There are many strategies to solve it. One of the most classical is the semi-implicit gradient descent algorithm proposed by Chambolle in [12]; another is the Forward-Backward method developed in [5]. Both of these methods need an inner loop in the numerical scheme. In this paper, we adopt the variable splitting method and express the problem in discrete form, where $\nabla$ is the discrete gradient operator given by forward differences. We introduce an auxiliary variable $d$ to take the place of $\nabla x$, which gives the approximation model

$$\min_{x,d} \mu\|d\|_1 + \frac{1}{2}\|x - z\|_2^2 + \frac{\beta}{2}\|d - \nabla x\|_2^2,$$

with a sufficiently large penalty parameter $\beta$. We solve this problem by the alternating minimization scheme given in Algorithm 4.

Algorithm 4 (proximity of TV). Consider the following.
Fixing $x$, compute $d$ componentwise by soft thresholding: $d = \mathcal{S}_{\mu/\beta}(\nabla x)$.
Fixing $d$, compute $x$ by solving the linear system $(I + \beta\nabla^{\top}\nabla)\,x = z + \beta\nabla^{\top} d$.
It is worth mentioning that this alternating minimization scheme is embedded into the whole algorithm with just a single iteration, which means the algorithm has no inner loop.
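One alternating step of the splitting can be sketched as follows. This is a hedged sketch assuming periodic boundary conditions and anisotropic TV (the paper's exact discrete formulas do not survive the extraction); the $x$-subproblem's linear system is diagonalized by the 2-D FFT:

```python
import numpy as np

def grad(u):
    """Forward differences with periodic boundary conditions (x then y)."""
    return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    return px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_tv_step(z, mu=0.1, beta=10.0):
    """One alternating-minimization step for
    min_x mu*||grad x||_1 + 0.5*||x - z||^2, split as d ~ grad x.
    d-step: soft threshold grad z by mu/beta.
    x-step: solve (I + beta*grad^T grad) x = z - beta*div(d) via FFT."""
    zx, zy = grad(z)
    dx, dy = soft(zx, mu / beta), soft(zy, mu / beta)
    rhs = z - beta * div(dx, dy)
    ny, nx = z.shape
    wy = 2 - 2 * np.cos(2 * np.pi * np.arange(ny) / ny)   # eigenvalues of grad^T grad
    wx = 2 - 2 * np.cos(2 * np.pi * np.arange(nx) / nx)
    denom = 1 + beta * (wy[:, None] + wx[None, :])
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

A constant image has zero gradient, so it is a fixed point of this step, which is a quick sanity check of the implementation.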

Proximity of the texture norm. We deduce this operator in the continuous setting so that the derivation can be described clearly; this does not affect the numerical implementation. The proximity operator is defined by minimizing the sum of the squared texture norm and the quadratic distance to $z$. Denoting this objective function by $E(v)$, by Parseval's identity minimizing $E$ is equivalent to minimizing it in the Fourier domain; this yields a unique solution whose Fourier transform is obtained from that of $z$ by a pointwise multiplication. Taking the inverse Fourier transform, we conclude that the proximity operator of the texture norm is exactly a Fourier domain filtering.
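The mechanism can be illustrated on a stand-in norm, since the paper's exact texture norm does not survive the extraction. Here we use the squared $H^1$ seminorm $\mu\|\nabla v\|^2$ on a periodic 1-D signal; its proximity operator is diagonal in the Fourier basis, i.e., a pointwise Fourier filter, which is the general phenomenon described above:

```python
import numpy as np

def prox_quadratic_fourier(z, mu=2.0):
    """Proximity operator of v -> mu*||grad v||^2 on periodic 1-D signals:
    argmin_v mu*||grad v||^2 + 0.5*||v - z||^2.
    The optimality condition (I + 2*mu*grad^T grad) v = z is diagonal in
    the Fourier basis, so the prox is the filter 1/(1 + 2*mu*w(k))."""
    n = len(z)
    w = 2 - 2 * np.cos(2 * np.pi * np.arange(n) / n)   # symbol of grad^T grad
    return np.real(np.fft.ifft(np.fft.fft(z) / (1 + 2 * mu * w)))

k = np.arange(64)
z = np.sin(2 * np.pi * k / 64) + 0.3 * np.sin(2 * np.pi * 20 * k / 64)
v = prox_quadratic_fourier(z)   # low frequency kept, high frequency damped
```

For this smoothness seminorm the filter is low-pass; a texture norm would instead favor the oscillatory part, but the computational structure (one FFT, one pointwise division, one inverse FFT) is identical.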

3.2. Solving the Proposed Model Numerically

We solve (11) by updating $m$, $u$, and $v$ alternately in turn. The framework is given in Algorithm 1.

Algorithm 1

In Algorithm 1, updating $m$ amounts to computing the proximity operator of a weighted $\ell_1$ term, which is just the soft thresholding operator and can be realized in closed form. Updating $u$ amounts to solving a TV-L1 problem, and there are many methods to deal with it; to improve the efficiency of the algorithm, we realize this update using the Douglas-Rachford splitting algorithm, combining the computation of the proximity operators of the $\ell_1$ norm and of the TV term. Updating $v$ amounts to solving a problem coupling an $\ell_1$ term with the texture norm, which is also dealt with by the Douglas-Rachford splitting algorithm, combining soft thresholding and Fourier domain filtering. A detailed description of the whole process is shown in Algorithm 2.

Algorithm 2

There are some issues that need to be illustrated.

Remark 5. For the TV proximity step in Algorithm 1, we realize the update by Algorithm 4 in a single iteration, which avoids the inner loop and improves the efficiency of the algorithm.

Remark 6. In this paper, the algorithm is stopped when the relative change between two successive iterates falls below a prescribed tolerance.
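The exact stopping criterion is not preserved in this extraction; a common relative-change test, given here as an assumed sketch with an illustrative tolerance, looks like this:

```python
import numpy as np

def converged(u_new, u_old, tol=1e-4):
    """Relative-change stopping test: stop when
    ||u_new - u_old|| / ||u_old|| falls below tol."""
    denom = max(np.linalg.norm(u_old), 1e-12)   # guard against a zero iterate
    return np.linalg.norm(u_new - u_old) / denom < tol

u_old = np.ones((8, 8))
small_step = converged(u_old + 1e-6, u_old)   # tiny update -> stop
large_step = converged(u_old + 1.0, u_old)    # large update -> keep iterating
```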

4. Implementation Details and Experimental Results

This section is mainly devoted to numerical simulation of image restoration in the presence of multiplicative noise. We test the performance of our model under the corruption of two types of multiplicative noise: Gamma and Rayleigh.

Gamma Noise. The probability density function of Gamma noise is given by

$$p(x) = \frac{1}{\theta^{M}\Gamma(M)}\, x^{M-1} e^{-x/\theta}, \quad x \geq 0,$$

where $\theta > 0$ is the scale parameter and $M > 0$ is the shape parameter. The mean and the variance of $\eta$ are $M\theta$ and $M\theta^2$, respectively. As multiplicative noise of an image, the mean of $\eta$ is set to be 1, so $\theta = 1/M$, and the variance of $\eta$ is $1/M$.
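The stated moments are easy to check by sampling; M = 10 and the seed are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# With shape M and scale 1/M, the Gamma noise has mean 1 and variance 1/M.
M = 10
eta = rng.gamma(shape=M, scale=1.0 / M, size=10**6)
# eta.mean() should be close to 1 and eta.var() close to 1/M
```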

In model (8), $m = 1/\eta$, and the value of its mean is estimated as $\bar{m} = M/(M-1)$ [10].

Rayleigh Noise. The probability density function of Rayleigh noise is given by

$$p(x) = \frac{x}{\sigma^2}\, e^{-x^2/(2\sigma^2)}, \quad x \geq 0,$$

where $\sigma$ is a positive parameter. The mean value of $\eta$ is equal to $\sigma\sqrt{\pi/2}$, the variance of $\eta$ is equal to $(2 - \pi/2)\sigma^2$, and the mean value of the inverse of the Rayleigh noise can be estimated as $\sqrt{\pi/2}/\sigma$ [10].

We run experiments on three images: Cameraman, Lena, and an aerial image of a city; see Figure 2.

Figure 2: The original images: (a) Cameraman, (b) Lena, and (c) Aerial image of some city.

In this paper, we use the signal to noise ratio (SNR) and the peak signal to noise ratio (PSNR) as evaluation indicators, measured between the clean and restored images. Let $\hat{u}$ represent the image restored from the noisy image $f$, and let $\mathrm{Var}(u)$ represent the variance of the clean image $u$. Then the SNR indicator is defined by

$$\mathrm{SNR} = 10\log_{10}\frac{\mathrm{Var}(u)}{\|u - \hat{u}\|_2^2 / N^2},$$

and the PSNR is defined as

$$\mathrm{PSNR} = 10\log_{10}\frac{255^2 N^2}{\|u - \hat{u}\|_2^2},$$

where $N$ denotes the side length of the (square) images.
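Both indicators can be written compactly with the mean squared error; the stripe test image and constant-error "restoration" below are synthetic checks, not data from the paper:

```python
import numpy as np

def snr(u, u_hat):
    """SNR in dB: variance of the clean image over the mean squared error."""
    mse = np.mean((u - u_hat) ** 2)
    return 10 * np.log10(u.var() / mse)

def psnr(u, u_hat, peak=255.0):
    """PSNR in dB for images with intensity peak (255 for 8-bit images)."""
    mse = np.mean((u - u_hat) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

u = np.tile([0.0, 255.0], (64, 32))   # 64x64 stripe test image
u_hat = u + 25.5                      # restoration with a constant error of peak/10
# a constant error of peak/10 gives a PSNR of exactly 20 dB
```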

4.1. Parameters Selection

Several parameters are needed to start up the new algorithm: the regularization parameters $\alpha$, $\mu$, and $\gamma$, which keep the balance among the terms in model (11); the relaxation parameters of the two Douglas-Rachford splitting loops in Algorithm 2; and the parameters of the proximity operators of the TV term and of the texture norm. In particular, $\mu$ is an important indicator to trade off between the data term and the regularization terms. The relaxation and proximity parameters are set empirically and kept fixed in all experiments.

According to the experimental analysis in [10], the ratio between the weights in (8) is almost constant, depending only on the type of noise, and was empirically estimated for Gamma noise and Rayleigh noise, respectively. From a number of experiments, we find that the best value of $\mu$ can be chosen from a moderate range, with the exact selection depending on the noise level and the image. The values of $\mu$ used are given in Table 1.

Table 1: PSNR and SNR results for the Gamma noise removal.

The parameter $\gamma$ weighting the texture term is key to the performance of model (11). Since a proper value of $\gamma$ is critical to the recovery of a satisfying image, it is necessary to discuss its impact on the denoising results. Here, we show its influence on image decomposition using the image Cameraman. Fixing the other parameters, we decompose the image into a cartoon component $u$ and a texture component $v$ for different values of $\gamma$ and display the experimental results in Figure 3. It is clear from Figure 3 that different selections of $\gamma$ bring completely different results. If $\gamma$ is selected too large, there is very little content in the texture component $v$, and many details remain in $u$. With the decrease of $\gamma$, more and more features get involved in the texture component $v$. However, when $\gamma$ is selected too small, the block effect in the cartoon component $u$ becomes serious, and some major features are included in the texture component $v$. Moreover, in the process of denoising, too small a value of $\gamma$ may bring many more details or small scale parts into $v$, including noise we do not want. For this reason, we select $\gamma$ experimentally according to the noise level; the exact selections can be found in Table 1.

Figure 3: The influence of $\gamma$ on the decomposition. (a) is the input image. (b–e) are the cartoon components $u$ for different values of $\gamma$. (f–i) are the texture components $v$ corresponding to the $u$'s in (b–e).
4.2. Experimental Results in the Assumption of Gamma Noise

Under the assumption of Gamma noise corruption, we test the denoising performance of the AA model, the convex model, and the proposed model on the images of Figure 2 and compare the SNRs/PSNRs of the three methods. The AA model is formulated as

$$\min_u \sum_i \left(\log u_i + \frac{f_i}{u_i}\right) + \lambda\, \mathrm{TV}(u),$$

which is solved by the gradient descent method

$$u^{k+1} = u^k - \tau\left(\frac{u^k - f}{(u^k)^2} - \lambda\, \mathrm{div}\!\left(\frac{\nabla u^k}{|\nabla u^k|}\right)\right),$$

where $\tau$ is the step size, fixed for all experiments, and $\lambda$ is the regularization parameter, adjusted according to the image and the noise level; the exact values can be found in Table 1. The convex model is solved by the ADMM method; see [10].
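One explicit gradient step for the AA energy can be sketched as follows. This is a hedged sketch: the smoothing parameter eps (to avoid division by a zero gradient magnitude), the periodic boundary handling, and all default values are illustrative choices, not the paper's settings:

```python
import numpy as np

def grad(u):
    """Forward differences with periodic boundary conditions."""
    return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

def div(px, py):
    """Discrete divergence (negative adjoint of grad)."""
    return px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)

def aa_gradient_step(u, f, lam=0.1, tau=0.1, eps=1e-3):
    """One explicit gradient-descent step for the AA energy
    sum(log u + f/u) + lam*TV(u), with |grad u| smoothed by eps."""
    ux, uy = grad(u)
    mag = np.sqrt(ux**2 + uy**2 + eps**2)
    curv = div(ux / mag, uy / mag)    # TV curvature term div(grad u / |grad u|)
    data = (u - f) / u**2             # derivative of log u + f/u
    return u - tau * (data - lam * curv)
```

When the current iterate equals the (noise-free) observation and is constant, both the data term and the curvature vanish, so the step leaves the image unchanged.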

In the test, the original images are corrupted by Gamma noise at three levels of the shape parameter $M$ (the larger $M$ is, the weaker the noise). In Table 1, we list the values of the related parameters and the SNRs/PSNRs obtained by averaging over ten noise realizations under the same experimental setting. It is clear that the proposed model performs considerably better in terms of PSNR and SNR values than the AA model and the convex model.

We further show the denoised results obtained by the three methods in Figures 4–6. It is observed that, at the two lower noise levels, as shown in Figures 4 and 5, the restored images from the new model possess clearer contents and much better visual effects than those of the AA model and the convex model. However, at the highest noise level, we can observe from Figure 6 that the new model possesses little advantage over the convex model in visual effect. We also observe from Table 1 that, in this case, the SNRs/PSNRs of our model and the convex model are neck and neck. The main reason for this phenomenon is that the larger the variance of the noise is, the more seriously the texture components are destroyed and regarded as part of the noise, so a bigger value of the texture parameter must be used. As described in the analysis of the new model in Section 2, there is then little content in the texture part, and the denoising results of the new model approximate those of the convex model.

Figure 4: Experiment results under the corruption of Gamma noise (lowest of the three noise levels).
Figure 5: Experiment results under the corruption of Gamma noise (middle noise level).
Figure 6: Experiment results under the corruption of Gamma noise (highest noise level).
4.3. Experimental Results in the Assumption of Rayleigh Noise

The Rayleigh noise is generated by $\eta = \sigma\sqrt{-2\ln U}$, where $U$ is a uniformly distributed random variable generated by “rand” in MATLAB. The mean of the Rayleigh noise is set to be 1, that is, $\sigma = \sqrt{2/\pi}$, and the variance is then $(4 - \pi)/\pi \approx 0.27$.
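The same inverse-transform construction is easy to reproduce; the sample size and seed below are illustrative (we use $\log(1-U)$ rather than $\log U$ only to avoid the measure-zero case $U = 0$):

```python
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_noise(shape, mean=1.0):
    """Rayleigh samples by inverse transform: n = sigma*sqrt(-2*ln(1-U)),
    U ~ Uniform[0,1), with sigma = mean/sqrt(pi/2) so that E[n] = mean."""
    sigma = mean / np.sqrt(np.pi / 2)
    u = rng.random(shape)
    return sigma * np.sqrt(-2.0 * np.log1p(-u))

n = rayleigh_noise((1000, 1000))
# n.mean() should be close to 1 and n.var() close to 4/pi - 1
```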

Since the AA model is derived under the assumption of Gamma noise, we only test the denoising performance of the convex model and the proposed model. Table 2 shows the SNR/PSNR results of the two models. We notice that the proposed model achieves higher SNRs and PSNRs than the convex model. The experiment results are shown in Figure 7. It can be observed that the restored images from the new model possess clearer contents and much better visual effects.

Table 2: PSNR and SNR results for the Rayleigh noise removal.
Figure 7: Experiment results under the corruption of Rayleigh noise with mean 1.
4.4. Further Analysis of the Restored Images

In this section, we further analyze the cartoon part and the texture part of the restored image of model (11). Taking the case of Gamma noise as an example, we exhibit the corresponding experiment results in Figure 8.

It can be observed from Figure 8 that there is rich content in the texture parts, which enhances the image quality in both the SNR/PSNR indexes and the visual effects. So the proposed model can also be regarded as a cartoon-texture decomposition method under multiplicative noise corruption, a direction on which we plan further research.

Figure 8: Experiment results of the proposed model. Among (a–l), (a, e, i) are the noisy images corrupted by Gamma noise, (b, f, j) are the restored cartoon parts, (c, g, k) are the restored texture parts, and (d, h, l) are the whole recovered images.

5. Conclusions

A novel cartoon-texture decomposition based multiplicative noise removal model has been proposed in this paper. The main advantages of our work are embodied in two aspects. First, we incorporate a priori information about the texture component into the denoising model, which makes the model more flexible and efficient. Second, we solve the new convex model using optimal splitting algorithms together with a Fourier domain filtering method. The experiment results demonstrate that the proposed model removes the noise better than the two other models and that the recovered images have higher SNRs/PSNRs and better visual effects.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This paper is supported by the National Natural Science Foundation of China under Grant nos. 61472303, 61271294, 61362129, and 61379030 and by the Fundamental Research Funds for the Central Universities (no. NSIY21).

References

  1. L. Rudin, P.-L. Lions, and S. Osher, “Multiplicative denoising and deblurring: theory and algorithms,” in Geometric Level Set Methods in Imaging, Vision, and Graphics, pp. 103–119, Springer, New York, NY, USA, 2003.
  2. G. Aubert and J.-F. Aujol, “A variational approach to removing multiplicative noise,” SIAM Journal on Applied Mathematics, vol. 68, no. 4, pp. 925–946, 2008.
  3. J. Shi and S. Osher, “A nonlinear inverse scale space method for a convex multiplicative noise model,” SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 294–321, 2008.
  4. Y.-M. Huang, M. K. Ng, and Y.-W. Wen, “A new total variation method for multiplicative noise removal,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 20–40, 2009.
  5. S. Durand, J. Fadili, and M. Nikolova, “Multiplicative noise removal using L1 fidelity on frame coefficients,” Journal of Mathematical Imaging and Vision, vol. 36, no. 3, pp. 201–226, 2010.
  6. Y. Hao, X. Feng, and J. Xu, “Multiplicative noise removal via sparse and redundant representations over learned dictionaries and total variation,” Signal Processing, vol. 92, no. 6, pp. 1536–1549, 2012.
  7. Y.-M. Huang, L. Moisan, M. K. Ng, and T. Zeng, “Multiplicative noise removal via a learned dictionary,” IEEE Transactions on Image Processing, vol. 21, no. 11, pp. 4534–4543, 2012.
  8. T. Teuber and A. Lang, “Nonlocal filters for removing multiplicative noise,” in Scale Space and Variational Methods in Computer Vision, A. M. Bruckstein, B. M. ter Haar Romeny, A. M. Bronstein, and M. M. Bronstein, Eds., vol. 6667 of Lecture Notes in Computer Science, pp. 50–61, Springer, Berlin, Germany, 2012.
  9. F. Dong, H. Zhang, and D.-X. Kong, “Nonlocal total variation models for multiplicative noise removal using split Bregman iteration,” Mathematical and Computer Modelling, vol. 55, no. 3-4, pp. 939–954, 2012.
  10. X.-L. Zhao, F. Wang, and M. K. Ng, “A new convex optimization model for multiplicative noise and blur removal,” SIAM Journal on Imaging Sciences, vol. 7, no. 1, pp. 456–475, 2014.
  11. W. Yin, D. Goldfarb, and S. Osher, “The total variation regularized L1 model for multiscale decomposition,” Multiscale Modeling and Simulation, vol. 6, no. 1, pp. 190–211, 2007.
  12. A. Chambolle, “An algorithm for total variation minimization and applications,” Journal of Mathematical Imaging and Vision, vol. 20, no. 1-2, pp. 89–97, 2004.