Mathematical Problems in Engineering

Volume 2016, Article ID 5130346, 12 pages

http://dx.doi.org/10.1155/2016/5130346

## A Cartoon-Texture Decomposition Based Multiplicative Noise Removal Method

^{1}School of Mathematics and Statistics, Xidian University, Xi’an 710026, China

^{2}School of Mathematical Science, Henan Institute of Science and Technology, Xinxiang 453003, China

Received 23 April 2016; Revised 29 June 2016; Accepted 26 July 2016

Academic Editor: Maria L. Gandarias

Copyright © 2016 Chenping Zhao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We propose a new framework for multiplicative noise removal. To improve denoising performance, we add a regularization term for the texture component to the denoising model, yielding a multiscale multiplicative noise removal model. The proposed model is jointly convex and can therefore be solved efficiently by optimization algorithms. We introduce the Douglas-Rachford splitting method to solve the proposed model. The algorithm makes full use of several important proximity operators, which either have closed-form expressions or can be computed in a single iteration. In particular, the proximity operator of the norm in the texture regularizer is derived; it amounts to filtering in the Fourier domain. In the simulation experiments, we first analyze and select the required parameters and then test the designed algorithm with the chosen parameters on several images. Finally, we compare the denoising performance of the proposed model with that of existing models, using the signal-to-noise ratio (SNR) and the peak signal-to-noise ratio (PSNR) to evaluate the noise-suppression effects. Experimental results demonstrate that the designed algorithm solves the model effectively and that the images recovered by the proposed model have higher SNRs/PSNRs and better visual quality.

#### 1. Introduction

Image denoising is a basic and important task in image processing. The most maturely developed denoising model is the additive noise model, in which the noise is assumed to obey a Gaussian distribution; that is,

$$f = u + n, \tag{1}$$

where $f$ is the observed image, $u$ is the original image, and $n$ is the noise. However, the noise involved in many applications does not conform to the additive model; it may corrupt an image in other forms. In this paper, we are concerned with the denoising problem under the assumption that the original image has been corrupted by multiplicative noise. Examples include intensity inhomogeneity in magnetic resonance imaging (MRI) and speckle noise in ultrasonic and synthetic aperture radar (SAR) images. The degradation model of multiplicative noise can be expressed as

$$f = u \circ n, \tag{2}$$

where $\circ$ refers to componentwise multiplication. To model actual problems, the multiplicative noise $n$ is assumed to obey some random distribution; for example, the noise in SAR images is assumed to obey the Gamma distribution and that in ultrasonic images the Rayleigh distribution.
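To make the two degradation models concrete, the following sketch (our illustration, not part of the paper) simulates additive Gaussian noise and unit-mean Gamma-distributed multiplicative speckle, as is typical for SAR imagery; the number of looks `looks` and the noise levels are assumed parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(u, sigma=0.1):
    """Additive model f = u + n with n ~ N(0, sigma^2)."""
    return u + rng.normal(0.0, sigma, u.shape)

def add_gamma_speckle(u, looks=10):
    """Multiplicative model f = u * n with n ~ Gamma(looks, 1/looks), so E[n] = 1."""
    n = rng.gamma(shape=looks, scale=1.0 / looks, size=u.shape)
    return u * n

u = np.full((64, 64), 0.8)        # constant test image
f_add = add_gaussian_noise(u)
f_mul = add_gamma_speckle(u)
print(f_mul.mean())               # close to 0.8, since the speckle has unit mean
```

Because the Gamma noise has unit mean, the speckled image keeps the mean of the clean image, which is the property exploited by the mean-preserving variational models discussed below.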

##### 1.1. Multiplicative Noise Removal Models

There are many multiplicative noise removal models. One of the most classic, based on TV regularization, is the RLO model proposed by Rudin et al. [1]:

$$\min_u \int |\nabla u| \quad \text{subject to} \quad \int \frac{f}{u} = 1, \quad \int \left(\frac{f}{u} - 1\right)^2 = \sigma^2, \tag{3}$$

where the mean of the noise is assumed to be 1 and the variance is assumed to be $\sigma^2$. Since (3) is a nonconvex problem, it is very difficult to solve.

The second classic model is the AA model [2], given by Aubert and Aujol under the hypothesis of a Gamma distribution:

$$\min_u \int \left(\log u + \frac{f}{u}\right) + \lambda \int |\nabla u|. \tag{4}$$

The objective function in (4) is also nonconvex. However, Aubert and Aujol showed the existence of minimizers of the objective function and employed a gradient method to solve (4) numerically. Several improved works based on (4) have been developed in recent years.
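A gradient method for (4) can be sketched as follows, using an $\varepsilon$-smoothed TV term so the energy is differentiable. The step size, smoothing parameter, and iteration count below are our assumptions for illustration, not the authors' settings.

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary conditions."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def aa_energy(u, f, lam, eps=0.1):
    """Smoothed AA energy: sum(log u + f/u) + lam * smoothed TV."""
    gx, gy = grad(u)
    return np.sum(np.log(u) + f / u) + lam * np.sum(np.sqrt(gx**2 + gy**2 + eps**2))

def aa_gradient_descent(f, lam=0.2, tau=1e-3, iters=200, eps=0.1):
    u = f.copy()
    for _ in range(iters):
        gx, gy = grad(u)
        norm = np.sqrt(gx**2 + gy**2 + eps**2)
        tv_grad = -div(gx / norm, gy / norm)      # gradient of the smoothed TV term
        data_grad = 1.0 / u - f / u**2            # gradient of sum(log u + f/u)
        u = np.maximum(u - tau * (data_grad + lam * tv_grad), 1e-2)  # keep u positive
    return u

rng = np.random.default_rng(1)
clean = np.ones((32, 32)); clean[8:24, 8:24] = 1.5
f = clean * rng.gamma(10, 0.1, clean.shape)       # unit-mean Gamma speckle
u = aa_gradient_descent(f, lam=0.2)
```

With a sufficiently small step size the smoothed energy decreases monotonically, but since (4) is nonconvex, the method only guarantees convergence to a stationary point.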

Recently, Shi and Osher [3] proposed applying a logarithm transformation to the noisy observation and derived a TV minimization model for multiplicative noise removal. Huang et al. also proposed a log-domain denoising model by using the transformation $w = \log u$ in (4). Consider

$$\min_w \int \left(w + f e^{-w}\right) + \lambda \int |\nabla w|, \tag{5}$$

which is known as the EXP model [4]. The objective function in (5) is strictly convex in $w$, and the authors proposed an alternating minimization algorithm to solve the model, showing its convergence at the same time. Nevertheless, model (5) is convex in the log variable $w$ rather than in the original image domain. At the same time, Durand et al. [5] proposed a method composed of several stages. They also used the log-image data and applied reasonable suboptimal hard thresholding to its curvelet transform; they then applied a variational method minimizing a specialized hybrid criterion composed of a data-fidelity term on the thresholded curvelet coefficients and a TV regularization term in the log-image domain. The restored image is obtained by exponentiating the minimizer, weighted in such a way that the mean of the original image is preserved. Their restored images combine the advantages of shrinkage and variational methods. Besides the above approaches, dictionary learning and nonlocal means methods have also been proposed and developed for multiplicative denoising [6–9].
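The log-transform strategy shared by these methods can be sketched generically: take $w = \log f$, apply any additive-noise denoiser in the log domain, exponentiate back, and rescale so the mean is preserved. The moving-average filter below is a deliberately simple placeholder standing in for the TV and curvelet machinery of [3–5], not the authors' method.

```python
import numpy as np

def box_blur(w, k=3):
    """k-by-k moving average: a stand-in for a proper log-domain denoiser."""
    pad = k // 2
    wp = np.pad(w, pad, mode="edge")
    out = np.zeros_like(w)
    for i in range(k):
        for j in range(k):
            out += wp[i:i + w.shape[0], j:j + w.shape[1]]
    return out / (k * k)

def log_domain_denoise(f, denoiser=box_blur):
    w = np.log(f)                      # multiplicative noise becomes additive
    w_hat = denoiser(w)                # denoise in the log domain
    u = np.exp(w_hat)                  # back to the image domain
    return u * (f.mean() / u.mean())   # rescale so the mean of f is preserved

rng = np.random.default_rng(2)
clean = np.ones((32, 32)); clean[10:22, 10:22] = 2.0
f = clean * rng.gamma(20, 1 / 20, clean.shape)
u = log_domain_denoise(f)
```

The final rescaling implements the mean-preserving weighting mentioned above: exponentiation is nonlinear, so without it the restored image would be biased.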

In [10], Zhao et al. developed a convex optimization model for multiplicative noise removal. The main idea is to rewrite the multiplicative noise equation so that the image variable and the noise variable are decoupled. That is to say, rewrite problem (2) as

$$f = N u, \tag{6}$$

where $N = \operatorname{diag}(n)$ is a diagonal matrix whose main diagonal entries are given by the noise $n$. According to (2), when there is no noise in the observed image, we obtain $n = \mathbf{1}$, a vector of all ones. When there is multiplicative noise in the observed image, we expect the entries of $n$ to be near 1 and, moreover, greater than zero. Hence $N$ is invertible, and (6) is equivalent to

$$u = W f, \tag{7}$$

where $W = \operatorname{diag}(w)$ is the diagonal matrix of the vector $w$ and $w_i = 1/n_i$. The convex model proposed to solve (2) is

$$\min_{u, w} \; \alpha \left\| w - \operatorname{mean}(w)\mathbf{1} \right\|_2^2 + \left\| u - \operatorname{diag}(f) w \right\|_1 + \lambda \| u \|_{TV}. \tag{8}$$
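The decoupled rewrite above is easy to check numerically: with $w$ defined componentwise as $w_i = 1/n_i$, the clean image is exactly $\operatorname{diag}(f)\, w$. The sketch below verifies only this algebra; the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.uniform(0.5, 2.0, size=16)   # original image, flattened to a vector
n = rng.gamma(10, 0.1, size=16)      # positive multiplicative noise
f = u * n                            # observation: f = N u with N = diag(n)

w = 1.0 / n                          # decoupled noise variable, w_i = 1/n_i
u_rec = np.diag(f) @ w               # u = W f, written equivalently as diag(f) w
print(np.allclose(u_rec, u))         # True: the rewrite is exact
```

The point of the rewrite is that $u$ appears linearly once $w$ is treated as the unknown noise variable, which is what makes the resulting model jointly convex.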

In (8), the first term measures the variance of $w$, the second term is the data-fidelity term, and the third term is the TV regularization term. If the first term were absent and $w$ could be assigned arbitrarily, minimizing the remaining terms could lead to the trivial solution $u = c\mathbf{1}$, where $c$ is a constant. The variance term thus guarantees a proper minimizer of (8) [10]. Furthermore, it is worth noting that the image variable $u$ and the noise variable $w$ are decoupled and that the model is jointly convex in $(u, w)$, which makes it easy to solve with standard optimization methods.

##### 1.2. The Contribution

We first analyze the behavior of (8). Observe that, if we fix $w = \mathbf{1}$ and drop the variance term, problem (8) reduces to a TV-L1 problem, denoted as

$$\min_u \; \lambda \| u \|_{TV} + \| u - f \|_1. \tag{10}$$

In [11], it was pointed out that minimizing (10) extracts the large-scale component of an image $f$ and discards its small-scale contents. In other words, we can extract cartoon components at different scales through the selection of the parameter $\lambda$. We verify this conclusion with the following experiment. The test image, shown in Figure 1(a), contains four squares of decreasing size. The results are as follows: for the smallest value of $\lambda$, all four squares are extracted, as shown in Figure 1(b); as $\lambda$ increases, the smallest square is lost, as shown in Figure 1(c); for a larger $\lambda$, only the two big squares remain, as shown in Figure 1(d); and for the largest $\lambda$, only the biggest square survives, as shown in Figure 1(e). These results are consistent with the theoretical analysis in [11]: the bigger $\lambda$ is, the bigger the scale of the extracted features. As a result, we have sound reason to believe that the minimizer of model (10) is essentially the cartoon component of the restored image we expect.
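This scale-selection behavior can be understood by a back-of-envelope energy comparison for (10) with a single unit-height square of side $L$ on a zero background: keeping the square costs $\lambda \cdot TV = 4\lambda L$, while removing it costs the L1 penalty $L^2$, so the square is removed whenever $L < 4\lambda$. The sketch below is our illustration of this rule; [11] gives the rigorous geometric analysis.

```python
def square_survives(side, lam):
    """Compare the two candidate energies in (10) for a unit-height square.

    Keeping the square:  E = lam * perimeter = lam * 4 * side  (TV term only)
    Removing the square: E = area = side**2                    (L1 term only)
    """
    keep = lam * 4.0 * side
    remove = float(side) ** 2
    return keep < remove      # the square is kept if keeping is cheaper

# Larger lam wipes out larger squares, mirroring the four-square experiment.
sides = [4, 16, 32, 64]
for lam in [0.5, 2.0, 6.0, 12.0]:
    print(lam, [s for s in sides if square_survives(s, lam)])
```

As $\lambda$ sweeps upward the surviving set shrinks from all four squares down to the largest one, exactly the qualitative sequence observed in Figures 1(b)–1(e).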