Abstract

In this paper, we first propose restarted homotopy perturbation methods (RHPM) for multiplicative noise removal of the RLO and AA2 models. The main difficulty in applying the RHPM to the nonlinear denoising problem is settled by using binomial series techniques. We next propose the split Bregman methods for multiplicative noise removal of the RLO and AA2 models. The difficulty in applying the split Bregman method to the nonlinear denoising problem can be handled by transforming ill-conditioned linear systems into well-conditioned linear systems using splitting techniques of singular matrices. Lastly, numerical experiments for several test problems are provided to demonstrate the efficiency and reliability of the RHPM and split Bregman methods.

1. Introduction

Image denoising, one of the fundamental problems in image processing, is the task of recovering an original image from a given noisy image. Let $\Omega$ be an open rectangular domain in $\mathbb{R}^2$, let $f$ be an observed noisy image to be denoised, and let $u$ be the original unknown image to be restored. Mathematical models of noisy images can be classified as additive or multiplicative according to the relation between the noise and the clean image, i.e., $f = u + n_a$ or $f = u\,n_m$, where $n_a$ and $n_m$ are unknown additive and multiplicative noise functions, respectively. In this paper, we focus on multiplicative noise removal for observed images corrupted by multiplicative Gaussian white noise or Gamma noise with mean equal to one.

There have been many investigations of additive model (1) using the following general nonlinear variational model [1, 2]: $\min_{u} \int_{\Omega} \phi(|\nabla u|)\,dx\,dy + \frac{\lambda}{2} \int_{\Omega} (u - f)^2\,dx\,dy$, where the first term on the right-hand side is a regularizer in general form which ensures the smoothness of the desired image while preserving edges, the second term is a data fitting term which reflects the fidelity between the original image and the observed noisy image, and $\lambda > 0$ is a regularization parameter.

Many approaches have been proposed to remove additive noise, such as variational approaches [3, 4], stochastic approaches [5, 6], wavelet approaches [7, 8], framelet approaches [9, 10], and RHPM approaches [11], to name a few. It is well known that the model introduced by Rudin, Osher, and Fatemi [12] (called the ROF model) is a popular model for image denoising due to its edge-preserving property. If $\phi(|\nabla u|) = |\nabla u|$ in (2), then (2) reduces to the ROF model based on Total Variation (TV): $\min_{u} \int_{\Omega} |\nabla u|\,dx\,dy + \frac{\lambda}{2} \int_{\Omega} (u - f)^2\,dx\,dy$. Since the images restored by the ROF model lie in the space of functions of bounded variation, the ROF model preserves sharp edges and object boundaries, and it is especially effective for restoring piecewise constant images. However, the ROF model does not preserve details and textures well, since it tends to produce staircase effects.

Multiplicative noise models occur in the study of several coherent imaging systems, such as synthetic aperture radar (SAR) and sonar (SAS), ultrasound or laser imaging, and magnetic resonance imaging (MRI). Research on variational models for additive noise removal has been fruitful, but variational models for multiplicative noise removal have received much less attention. The difference between variational models for additive and multiplicative noise removal is that the data fitting term depends on the noise distribution, such as the Gamma, Poisson, Rayleigh, and Gauss distributions, which arise from different means of image acquisition. SAR images follow Gamma distributions [13], medical ultrasound or resonance images fit Rayleigh distributions [14], and astronomical microscopy images and medical SPECT/PET images fit Poisson distributions [15, 16]. If the multiplicative noise is not too strong, the abovementioned noise can be approximated by a Gaussian distribution [17].

For multiplicative Gaussian white noise with mean equal to one, Rudin, Lions, and Osher [17] proposed variational diffusion model (4) (called the RLO model), where $\lambda_1$ and $\lambda_2$ are weight parameters.

For multiplicative Gamma noise with mean equal to one, Jin and Yang [18] applied the exponential transformation introduced by Huang et al. [19] to the fitting term of the AA model [13] and proposed variational model (5) (called the AA2 model in this paper), where $\lambda$ is a weight parameter.

Algorithm design for the classic TV model has attracted considerable interest since the model was introduced, with the aim of improving computational efficiency and the quality of restored images; examples include the artificial time marching method [12], the fixed-point iterative method [20], the primal-dual method [21], the dual method [22], the split method [23], the Bregman iterative method [24], the split Bregman method [25], and the unbiased Box-Cox transformation method [26]. Among them, the split Bregman method combines the advantages of the split method (easy implementation) and the Bregman iterative method (good quality of restored images). The variational models for multiplicative noise removal are more complex than the corresponding ones for additive noise removal, and previous studies have focused mainly on models tailored to different noise distributions; some researchers have also paid attention to algorithm design, such as [12, 20, 27, 28]. The starting point of our investigation is RLO model (4) and AA2 model (5) with the TV regularization term. The purpose of this paper is to propose restarted homotopy perturbation methods (RHPM) and split Bregman methods for multiplicative noise removal models (4) and (5).

The paper is organized as follows. In Section 2, we review the TM (time marching) method for multiplicative noise removal models. In Section 3, we briefly review a restarted homotopy perturbation method (RHPM). In Section 4, we propose RHPM algorithms for the multiplicative noise removal models. In Section 5, we propose split Bregman algorithms for the multiplicative noise removal models. In Section 6, we provide numerical experiments for several test problems in order to evaluate the performance of the RHPM and split Bregman methods. Lastly we provide concluding remarks.

2. Review of the TM Method for Multiplicative Noise Removal

The Euler-Lagrange equations corresponding to RLO model (4) and AA2 model (5) lead to nonlinear elliptic PDEs (6) and (7), respectively, with homogeneous Neumann boundary conditions, where $\mathbf{n}$ is the unit outward normal vector to the boundary $\partial\Omega$.

We briefly describe the TM (time marching) method [12] only for AA2 model (7). To avoid division by zero in the numerical implementation, we replace the nondifferentiable term in (7) with a smooth approximation term. Then (7) is transformed into (9). In order to apply the TM method to (9), we consider the time-dependent nonlinear parabolic PDE (10) corresponding to (9), where the initial condition is given.

For numerical implementation, let us assume that the domain has been split into cells whose grid points are located at $(x_i, y_j)$, with time levels $t_k = k\,\Delta t$, where $\Delta t$ and $k$ denote the time step and iteration number, respectively. We denote the values of $u$ and $f$ at the grid points by $u_{i,j}^{k}$ and $f_{i,j}$.

Without loss of generality, we can assume that the spatial mesh size is one. Then nonlinear PDE (10) can be approximated by finite difference formula (11), whose coefficient terms are given by (12).

In a similar way as was done for the AA2 model, we can obtain formula (14) for the RLO model, where the boundary conditions are defined the same as those in the AA2 model.

Hence, we obtain TM Algorithms 1 and 2 for the AA2 and RLO models. In Algorithms 1 and 2, maxit denotes the maximum number of iterations, $f$ denotes the noisy image, $u^k$ denotes the restored image at the $k$th iteration, and MAE($k$) denotes the mean absolute error at the $k$th iteration, measured in the $\ell_1$-norm.

1: MAE, for
2: for   to   do
3:Compute using Equation (12) for
4:Compute using Equation (11) for
5:if    then
6:Stop
7:end if
8: end for
1: MAE, for
2: for   to   do
3:Compute using Equation (12) for
4:Compute using Equation (14) for
5:if    then
6:Stop
7:end if
8: end for
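The MAE-based stopping rule shared by Algorithms 1 and 2 can be sketched generically as follows. This is a minimal sketch, assuming MAE is the mean absolute difference between the current iterate and a reference image; `step`, `run_tm`, and the toy damping update are hypothetical stand-ins for the model-specific TM updates (11)/(14), not the paper's actual formulas.

```python
import numpy as np

def mae(u, ref):
    """Mean absolute error between two images of the same shape."""
    return float(np.mean(np.abs(u - ref)))

def run_tm(step, f, ref, maxit=100):
    """Generic TM-style loop: apply one explicit update per iteration and
    stop as soon as the MAE against the reference image starts to increase.
    `step` is a hypothetical function performing one time-marching update."""
    u = f.copy()
    best, best_mae = u, mae(u, ref)
    for _ in range(maxit):
        u = step(u, f)
        m = mae(u, ref)
        if m > best_mae:          # MAE started to increase: stop
            break
        best, best_mae = u, m
    return best

# toy usage: a damping step that pulls the noisy image toward its mean
rng = np.random.default_rng(0)
ref = np.ones((16, 16))
f = ref * (1 + 0.3 * rng.standard_normal((16, 16)))
u = run_tm(lambda u, f: u + 0.2 * (u.mean() - u), f, ref)
```

The same loop skeleton applies to both models; only `step` changes.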

3. Review of the Restarted Homotopy Perturbation Method (RHPM)

We briefly describe the homotopy perturbation method (HPM) for solving the nonlinear partial differential equation $A(u) - f(r) = 0$, $r \in \Omega$, with the boundary conditions $B(u, \partial u / \partial n) = 0$, $r \in \Gamma$, where $A$ is a general differential operator, $B$ is a boundary operator, $f(r)$ is a known analytic function, and $\Gamma$ is the boundary of the domain $\Omega$. The operator $A$ can be divided into two parts $L$ and $N$, where $L$ is linear and $N$ is nonlinear. Therefore, (16) can be written as $L(u) + N(u) - f(r) = 0$. By using the homotopy technique, one can construct a homotopy $v(r, p) : \Omega \times [0, 1] \to \mathbb{R}$ which satisfies $H(v, p) = (1 - p)\,[L(v) - L(u_0)] + p\,[A(v) - f(r)] = 0$ or, equivalently, $L(v) - L(u_0) + p\,L(u_0) + p\,[N(v) - f(r)] = 0$, where $p \in [0, 1]$ is an embedding parameter and $u_0$ is the initial approximation of (16) which satisfies the boundary conditions. Clearly, we have $H(v, 0) = L(v) - L(u_0) = 0$ and $H(v, 1) = A(v) - f(r) = 0$. The changing process of $p$ from zero to unity is just that of $H(v, p)$ changing from $L(v) - L(u_0)$ to $A(v) - f(r)$. This is called deformation, and $L(v) - L(u_0)$ and $A(v) - f(r)$ are called homotopic in topology. If the embedding parameter $p$ is considered as a "small parameter" in the sense of the classical perturbation technique, then the solution of (20) can be expressed as a power series in $p$, that is, $v = v_0 + p\,v_1 + p^2 v_2 + \cdots$. Setting $p = 1$, the solution of (16) can be expressed as the sum of an infinite series $u = \lim_{p \to 1} v = v_0 + v_1 + v_2 + \cdots$.

Generally speaking, the computation of $v_n$ in the HPM becomes more complicated as $n$ increases. To circumvent this problem, the restarted homotopy perturbation method (RHPM) proposed by Han and Yun [11] repeats the HPM process, computing only the first few terms instead of infinitely many terms $v_n$. This is a big advantage of the RHPM over the HPM. The general procedure of the RHPM for solving a nonlinear equation is described below (see [11] for more details).

Let $u^{(0)}$ be an initial approximation of a nonlinear equation. First, we compute the first $m$-term approximation $u^{(1)} = v_0 + v_1 + \cdots + v_m$ with $v_0 = u^{(0)}$. Following the above computational process with $u^{(1)}$ as an initial approximation, a new approximation $u^{(2)}$ is computed in the same way. By repeating the above process, a new approximation $u^{(k+1)}$ can be computed from the initial approximation $u^{(k)}$, which was obtained at the $k$th step. Notice that $v_1, v_2, \ldots, v_m$ are the coefficients of $p, p^2, \ldots, p^m$, respectively. Then the exact solution $u$ can be approximated by $u^{(k)}$; that is, $u = \lim_{k \to \infty} u^{(k)}$ under some appropriate conditions. The RHPM which computes only the vectors $v_0, v_1, \ldots, v_m$ at every step is called the RHPM($m$) method.
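To make the restart idea concrete, the following toy sketch applies RHPM(2) to the scalar initial value problem $u' + u = 0$, $u(0) = 1$, whose exact solution is $e^{-t}$. This problem is not from the paper; it is chosen only to illustrate the restart mechanism. Polynomials in $t$ are stored as coefficient lists, and each cycle computes two correction terms from the current initial approximation.

```python
from math import exp, factorial

def deriv(c):
    """Derivative of a polynomial given by its coefficient list."""
    return [i * c[i] for i in range(1, len(c))] or [0.0]

def integ(c):
    """Antiderivative with zero constant term."""
    return [0.0] + [c[i] / (i + 1) for i in range(len(c))]

def add(a, b):
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def rhpm_cycle(u0, m):
    """One RHPM(m) cycle for u' + u = 0 with initial approximation u0(t):
    v1 = -integral(u0' + u0), v_{k+1} = -integral(v_k); return u0 + v1 + ... + vm."""
    terms = []
    v = [-x for x in integ(add(deriv(u0), u0))]
    for _ in range(m):
        terms.append(v)
        v = [-x for x in integ(v)]
    approx = u0
    for t_ in terms:
        approx = add(approx, t_)
    return approx

u = [1.0]                      # initial approximation u0(t) = 1
for _ in range(2):             # two restarts of RHPM(2)
    u = rhpm_cycle(u, 2)
# after two cycles the coefficients match the degree-4 Taylor series of e^{-t}
expected = [(-1.0) ** i / factorial(i) for i in range(5)]
```

Each restart takes the previous partial sum as the new $v_0$, which is exactly how the RHPM avoids computing high-order terms $v_n$ in a single expansion.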

4. RHPM Algorithms for Multiplicative Noise Removal

In this section, we describe an application of the RHPM to multiplicative denoising problems of the RLO and AA2 models. Before doing so, we describe the binomial series, which is used in the application of the RHPM. The difficulty in applying the RHPM to the TV-based image denoising problem comes from the nonlinear terms in (36) and (62). In order to handle this difficulty, the following binomial series is used: for $|x| < 1$ and a real number $\alpha$, $(1 + x)^{\alpha} = \sum_{n=0}^{\infty} \binom{\alpha}{n} x^{n} = 1 + \alpha x + \frac{\alpha(\alpha - 1)}{2!} x^{2} + \cdots$. If $\alpha = -1$ in (27), then $(1 + x)^{-1} = 1 - x + x^{2} - x^{3} + \cdots$. Letting $x = u - 1$ in (28), $1/u$ for $0 < u < 2$ can be approximated by the first few terms of the series (29). If $u$ is large, then, by introducing a small parameter $\epsilon$ and using (29), one obtains approximation (30). Similarly, a corresponding approximation (31) is obtained from (27) by introducing a small parameter $\epsilon$. Note that the small parameter $\epsilon$ in (30) and (31) is used to guarantee convergence of the binomial series when $u$ is large.
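A quick numerical check of the truncated binomial series for $1/u$ is given below. This is a sketch: the function name, the truncation length, and the optional stabilizing parameter `eps` are illustrative choices, with `eps` playing the role of the small parameter $\epsilon$ described above.

```python
import numpy as np

def binom_inv(u, eps=0.0, terms=3):
    """Approximate 1/(u + eps) by the first `terms` terms of the binomial
    series (1 + x)^(-1) = 1 - x + x^2 - ..., with x = u + eps - 1.
    The truncation is valid when |u + eps - 1| < 1."""
    x = u + eps - 1.0
    return sum((-x) ** n for n in range(terms))

u = np.linspace(0.5, 1.5, 11)          # intensities scaled to lie near 1
approx = binom_inv(u, terms=6)
err = np.max(np.abs(approx - 1.0 / u))
# geometric-series remainder bound: |x|^6 / (1 - |x|) = 0.5^6 / 0.5 = 0.03125
```

This illustrates why the images must be scaled before the series is applied: the truncation error is controlled only when $|u - 1|$ stays safely below one.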

We first describe how to apply the RHPM to the following time-dependent nonlinear parabolic PDE corresponding to AA2 model (7). For simplicity of exposition, we describe the computational process for RHPM(2) (i.e., $m = 2$). We use three terms of binomial series (27) for the numerical computation of (36) and (62), so division by zero does not occur. Let the operators $L$, $N$, and $B$ be defined accordingly. From (20), one obtains a homotopy equation whose terms are given in (36). Using the second-order Taylor series expansion and the power series expansion of form (22), the nonlinear term can be approximated accordingly. Since the solution of (35) is of form (22), a simple computation yields the coefficient relations. Substituting into (30) and using (39), one obtains an expansion whose coefficients depend on the current approximation.

Hence, the first term of the right-hand side in (36) can be written accordingly. Substituting into (31) and using (39), one obtains a corresponding expansion.

Hence, the second term of the right-hand side in (36) can be written accordingly. From (36), (37), (42), and (44), (35) can be expressed in expanded form. Comparing the coefficients of $p^0$, $p^1$, and $p^2$ in (44) yields the equations for $v_0$, $v_1$, and $v_2$. Notice that the initial approximation is set to the noisy image. Applying the inverse operator of $L$ to (47), one obtains (49). From (49), the partial derivatives of $v_1$ can be computed. Using (51), the required quantities can be expressed in terms of $v_0$ and $v_1$. Substituting (49) and (52) into (48) and applying the inverse operator of $L$ to (48), one obtains (54). Now update $v_0$, whose new value is set to $v_0 + v_1 + v_2$. By repeating the aforementioned process with the new updated $v_0$, we obtain the RHPM(2) method for the AA2 model.

For numerical implementation of the RHPM(2) method, let us assume that the domain has been split into cells whose grid points are located at $(x_i, y_j)$. Let $t_k = k\,\Delta t$, where $\Delta t$ and $k$ refer to the time step and iteration number, respectively. We denote the values of the relevant functions at the grid points by the corresponding subscripted quantities. Without loss of generality, we can assume that the spatial mesh size is one. For simplicity of exposition, we define notation as follows, with the boundary conditions defined the same as those in the TM method. Assume that the approximation at step $k$ has been computed for all grid points. Then discretization of (49) is given by (57), and discretization of (54) is given by (59).

We now compute the next iterate. Repeating the above process with the new approximation as an initial approximation, we obtain the RHPM(2) method, called Algorithm 3, for multiplicative noise removal of the AA2 model.

1: MAE, for all and
2: for   to   do
3:Set
4:Compute using (57) for all and
5:Compute using (59) for all and
6:
7:if    then
8:Stop
9:end if
10: end for

We next describe how to apply RHPM(2) to the time-dependent nonlinear parabolic PDE corresponding to RLO model (6), where the initial condition is given. Let the operators $L$, $N$, and $B$ be defined as in (62). In order to handle the difficulty in applying the RHPM to the RLO model, we use three terms of binomial series (27) for the nonlinear terms in (62). That is, for a small parameter $\epsilon$, these terms can be approximated by (63).

Using (62) and (63), the RHPM(2) method for the RLO model can be derived in a similar way as was done for the AA2 model, where some of the operators are defined the same as for the AA2 model and the remaining ones are defined accordingly. Comparing the coefficients of $p^0$, $p^1$, and $p^2$ in (64) and applying the inverse operator of $L$ to (67), one obtains (69), whose two terms are the constant term and the corresponding coefficient, respectively.

Substituting (69) and (52) into (68) and applying the inverse operator of $L$ to (68), one obtains (70), whose terms are the corresponding coefficients.

For numerical implementation of the RHPM(2) method, assume that the approximation at step $k$ has been computed for all grid points. Then discretizations of (69) and (70) are given by (71) and (72), respectively.

We now compute the next iterate. Repeating the above process with the new approximation as an initial approximation, we obtain the RHPM(2) method, called Algorithm 4, for multiplicative noise removal of the RLO model.

1: MAE, for all and
2: for   to   do
3:Set
4:Compute using (71) for all and
5:Compute using (72) for all and
6:
7:if    then
8:Stop
9:end if
10: end for

From RHPM(2) Algorithms 3 and 4, we can easily obtain RHPM(1) Algorithms 5 and 6, whose computational costs per iteration are much cheaper than those of Algorithms 3 and 4. In Algorithms 3–6, maxit, $f$, and $u^k$ denote the maximum number of iterations, the noisy image, and the restored image at the $k$th iteration, respectively. Notice that all images are assumed to have an intensity range that guarantees numerical stability of the RHPM methods.

1: MAE, for all and
2: for   to   do
3:Set
4:Compute using (57) for all and
5:
6:if    then
7:Stop
8:end if
9: end for
1: MAE, for all and
2: for   to   do
3:Set
4:Compute using (71) for all and
5:
6:if    then
7:Stop
8:end if
9: end for

5. Split Bregman Algorithms for Multiplicative Noise Removal

From (4) and (5), the general variational models based on TV regularization for multiplicative noise removal can be written as

To derive the alternating split Bregman algorithm for general multiplicative noise removal problem (74), we first introduce an auxiliary vector $d$ such that $d = \nabla u$ and minimize unconstrained problem (75), where $\mu > 0$ is a penalty parameter. Then unconstrained minimization problem (75) can be solved by split Bregman algorithm (76) using an auxiliary vector $b$, where $b^0 = 0$ and $u^0$ is set to the noisy image $f$.

Minimizing (76) alternately with respect to $u$ and $d$, one obtains the alternating split Bregman algorithm introduced in [25]:

The Euler-Lagrange equations corresponding to (78) and (79) are as follows:where , , and .

First we consider how to solve (80) for $u$. For the case of RLO model (4), the fidelity terms involve negative powers of $u$. To avoid division by zero or near-zero values, these terms are approximated using four terms of binomial series (27), where the image needs to be scaled so that its values lie in a range guaranteeing convergence of the binomial series. Thus the fidelity term can be approximated by a polynomial, and the corresponding term in (80) can be approximated accordingly. Substituting this approximation into (80), one obtains the following approximate PDE for the RLO model:

For the case of AA2 model (5), the fidelity term involves an exponential. Using four terms of the Taylor series for the exponential function, it can be approximated as follows.

From this approximate equation, the corresponding term in (80) can be approximated accordingly. Substituting this approximation into (80), we obtain the following approximate PDE for the AA2 model:

Now we consider the numerical approach for solving PDE problems of RLO model (86) and AA2 model (89). Let $u_{i,j}$, $f_{i,j}$, $d_{i,j}$, and $b_{i,j}$ be the components of $u$, $f$, $d$, and $b$, respectively, where $1 \le i, j \le n$. For an array $v \in \mathbb{R}^{n \times n}$, let $\mathrm{vec}(v)$ denote the long vector of size $n^2$ obtained by stacking the columns of $v$. Let $\nabla$ be a discrete gradient operator defined by forward differences. Then the discrete gradient operator can be represented by a matrix operator $D$ of size $2n^2 \times n^2$: $D = \begin{bmatrix} I_n \otimes B \\ B \otimes I_n \end{bmatrix}$, where $\otimes$ denotes the Kronecker product, $I_n$ denotes the identity matrix of order $n$, and $B$ denotes a bidiagonal forward-difference matrix of order $n$. Using the matrix operator $D$, it is easy to show that the differential terms in (86) and (89) can be approximated in matrix-vector form. Hence, we obtain the finite difference approximate equations for RLO model (86) and AA2 model (89), which are given by linear systems (94) and (95), respectively, where powers of arrays denote elementwise exponentiation, $u^k$ denotes the restored image at the $k$th iteration, all multiplications of two arrays denote elementwise multiplication, and the diagonal matrix appearing in the systems is of size $n^2 \times n^2$ with diagonal elements given by the corresponding image values.
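The Kronecker-product representation of the discrete gradient can be assembled and sanity-checked as follows. This is a sketch under one common convention: `diff_matrix` builds a forward-difference matrix whose last row is zero, mirroring homogeneous Neumann boundary conditions; the paper's exact matrix $B$ may differ in its boundary row.

```python
import numpy as np

def diff_matrix(n):
    """Forward-difference matrix B of order n (last row zero, a common
    choice reflecting homogeneous Neumann boundary conditions)."""
    B = -np.eye(n) + np.eye(n, k=1)
    B[-1, :] = 0.0
    return B

def gradient_operator(n):
    """Discrete gradient D of size 2n^2 x n^2 via Kronecker products:
    D = [I kron B; B kron I]."""
    B, I = diff_matrix(n), np.eye(n)
    Dx = np.kron(I, B)     # differences along one coordinate
    Dy = np.kron(B, I)     # differences along the other coordinate
    return np.vstack([Dx, Dy])

n = 8
D = gradient_operator(n)
L = D.T @ D                 # discrete Laplacian-type matrix D^T D
ones = np.ones(n * n)       # constant image: lies in the null space of D
```

Note that $D^\top D$ is symmetric positive semidefinite and annihilates constant images, which is precisely why the linear systems of this section are singular and need the splitting technique introduced below.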

Next, we consider how to solve (81) for $d$. Equation (81) can be expressed componentwise as (96). By squaring and adding the two equations in (96), one obtains an identity which implies (98). Substituting (98) into (96), we have generalized shrinkage formula (99). Generalized shrinkage formula (99) can also be found in [25, 29]. To make this paper self-contained, we have provided a detailed proof of (99). Numerical implementation of formula (99) is done componentwise for each pixel of $d$.
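The generalized shrinkage formula admits a direct componentwise implementation. The sketch below also verifies numerically, at a single pixel, that the shrinkage output minimizes the one-pixel objective $|d| + \frac{\mu}{2}|d - z|^2$, which is the standard form of the $d$-subproblem; the names `shrink`, `mu`, and `z` are illustrative.

```python
import numpy as np

def shrink(z1, z2, gamma):
    """Generalized shrinkage: d = max(|z| - gamma, 0) * z / |z|,
    applied to the 2-vector z = (z1, z2) at each pixel."""
    norm = np.sqrt(z1 ** 2 + z2 ** 2)
    scale = np.maximum(norm - gamma, 0.0) / np.maximum(norm, 1e-12)
    return z1 * scale, z2 * scale

# single-pixel check: shrink(z, 1/mu) should minimize |d| + (mu/2)|d - z|^2
mu, z = 2.0, np.array([0.9, -0.4])
d1, d2 = shrink(z[0], z[1], 1.0 / mu)
obj = lambda d: np.hypot(d[0], d[1]) + mu / 2 * ((d[0] - z[0]) ** 2 + (d[1] - z[1]) ** 2)
```

Because the update is a closed-form pointwise operation, the $d$-subproblem costs only $O(n^2)$ per iteration, which is one reason the split Bregman method is efficient.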

Lastly, numerical implementation for (77) can be done as follows:
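Putting the three updates together, the structure of the alternating split Bregman iteration can be sketched as the following generic loop. This is a sketch only: `solve_u` is a placeholder for the model-specific $u$-subproblem (the linear systems (94)/(95) solved by SOLVER in this paper), the forward-difference discretization is one assumption among several possible choices, and all function names are hypothetical.

```python
import numpy as np

def shrink2(x1, x2, gamma):
    """Soft shrinkage applied to the vector magnitude of (x1, x2)."""
    norm = np.sqrt(x1 ** 2 + x2 ** 2)
    scale = np.maximum(norm - gamma, 0.0) / np.maximum(norm, 1e-12)
    return x1 * scale, x2 * scale

def grad(u):
    """Forward differences with replicated boundary (an assumed convention)."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return gx, gy

def split_bregman(f, solve_u, mu, n_iter=20):
    """Generic alternating split Bregman loop for TV-type models.
    `solve_u(f, d, b, mu)` stands in for the model-specific u-subproblem."""
    u = f.copy()
    d = (np.zeros_like(f), np.zeros_like(f))
    b = (np.zeros_like(f), np.zeros_like(f))
    for _ in range(n_iter):
        u = solve_u(f, d, b, mu)                              # u-subproblem
        gx, gy = grad(u)
        d = shrink2(gx + b[0], gy + b[1], 1.0 / mu)           # d-subproblem
        b = (b[0] + gx - d[0], b[1] + gy - d[1])              # Bregman update
    return u

# trivial usage with an identity u-subproblem, just to exercise the loop
u0 = np.full((4, 4), 2.0)
u_out = split_bregman(u0, lambda f, d, b, mu: f, mu=1.0, n_iter=3)
```

Only `solve_u` differs between the RLO and AA2 models; the shrinkage and Bregman updates are identical.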

Linear systems (94) and (95) are singular or ill-conditioned, so standard direct or iterative methods may fail to produce an acceptable approximate solution. For this reason, we propose a new technique for solving linear systems (94) and (95). Let

Then linear systems (94) and (95) can be rewritten as (103). In order to solve ill-conditioned linear systems (103) for $u$, we propose the following iterative method:

Algorithm: SOLVER()

 Choose

for to

  Solve for

end for

In SOLVER(1) or SOLVER(2), $u^k$ refers to the restored image computed at the previous $k$th step, and $\omega$ is a parameter which should be chosen so that the coefficient matrix is well conditioned. To study the semiconvergence of SOLVER(), we first introduce some important properties of this type of iterative method.
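The idea behind SOLVER() — replace a singular system $Ax = b$ by repeatedly solving the well-conditioned shifted system — can be sketched generically as follows. This is a sketch under the assumed splitting $(A + \omega I)x_{\ell+1} = \omega x_\ell + b$; the concrete matrices of (103) are not reproduced, and a dense solve stands in for the CG method used in the paper.

```python
import numpy as np

def solver_omega(A, b, x0, omega, n_sweeps=200):
    """Iterate (A + omega*I) x_{l+1} = omega * x_l + b.  Each sweep solves a
    symmetric positive definite, well-conditioned system; any fixed point
    satisfies A x = b, so the iteration is semiconvergent for SPSD A."""
    M = A + omega * np.eye(A.shape[0])
    x = x0.copy()
    for _ in range(n_sweeps):
        x = np.linalg.solve(M, omega * x + b)
    return x

# toy singular SPSD system: A = B^T B for a forward-difference matrix B
n = 10
B = -np.eye(n) + np.eye(n, k=1)
B[-1, :] = 0.0
A = B.T @ B                        # singular: constants lie in its null space
x_true = np.arange(n, dtype=float)
b = A @ x_true                     # consistent right-hand side
x = solver_omega(A, b, np.zeros(n), omega=0.1)
res = np.linalg.norm(A @ x - b)
```

The shift $\omega$ trades conditioning against outer convergence speed: a larger $\omega$ makes each inner solve easier but slows the fixed-point sweeps, which matches the discussion following Theorem 7.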

Theorem 1 (see [30, 31]). Let be a symmetric positive semidefinite matrix and . Let with nonsingular and let . If is symmetric positive definite, then is semiconvergent, and an iterative method , or equivalently , is semiconvergent. In other words, generated by the iterative method converges to some solution to for each .

Lemma 2. Let and . Then are symmetric positive semidefinite, and and are symmetric positive definite.

Proof. It can be easily shown that all diagonal elements of are positive. Hence all properties of this lemma clearly hold from the assumptions.

Theorem 3. Let and . Then generated by SOLVER() converges to a solution to .

Proof. Note that . Let and . From Lemma 2, is symmetric positive semidefinite, and and are symmetric positive definite. Hence, is nonsingular and is symmetric positive definite. From Theorem 1, the iterative method , which is equivalent to , is semiconvergent. Hence the proof is complete.

Since the coefficient matrix is symmetric positive definite for the relevant parameter choices, the linear systems in SOLVER() can be solved using the CG (Conjugate Gradient) method [32, 33]. Notice that the convergence rate of the CG method depends on the condition number of the coefficient matrix. So we next provide some conditioning properties for the coefficient matrix in SOLVER(). For a square matrix $A$, let $\kappa(A)$ denote the condition number of $A$. For a symmetric matrix $A$ of order $n$, let $\lambda_i(A)$ denote the $i$th largest eigenvalue of $A$ for $1 \le i \le n$.

Lemma 4 (see [34]). Let and be symmetric matrices. Then for each

Lemma 5. Let , , and . Then for each

Proof. Since and are symmetric, from Lemma 4 one obtainsFrom Lemma 2, is symmetric positive definite and thusFrom (106) and (107), this lemma follows.

Lemma 6. Let , , and . Then for each

Proof. Note that all diagonal elements of the diagonal matrix belong to a set . Hence and . From Lemma 5, this lemma follows.

Theorem 7. Let , , and . If , then, for a given , there exists such that .

Proof. For , let From Lemma 6, . By simple calculation, one hasSince , from (110) and . Also note that and . Thus is a decreasing function which converges to 2, and is a decreasing function which converges to . It follows that for any , and for a given there exists a large such that . Since , for a given there exists a large such that . Hence the proof is complete.

Notice that the condition in Theorem 7 usually holds when the matrix is singular or ill-conditioned. Theorem 7 implies that the condition number of the coefficient matrix is small if a sufficiently large $\omega$ is chosen. In other words, for a suitably chosen large $\omega$, the coefficient matrix in SOLVER() is well conditioned, so the CG method for solving the linear systems in SOLVER() converges quickly to the exact solution. The split Bregman method is an iterative method which monotonically decreases the residual norm between the original image and the noisy image, so we do not need an accurate solution at every iteration. For this reason, we have used a small fixed number of sweeps and the CG method with a loose tolerance to obtain an approximate solution of the linear systems in SOLVER(). Notice that the CG method always achieved the desired tolerance within 15 iterations.
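The message of Theorem 7 — shifting a singular symmetric positive semidefinite matrix by $\omega I$ yields a modest condition number that improves as $\omega$ grows — can be observed numerically. The test matrix below is illustrative only (a 1D Neumann Laplacian with constants in its null space), not one of the paper's matrices.

```python
import numpy as np

# singular SPSD test matrix: 1D Neumann Laplacian
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, 0] = A[-1, -1] = 1.0

conds = [np.linalg.cond(A + w * np.eye(n)) for w in (0.1, 1.0, 10.0)]
# since lmin(A) = 0, cond(A + wI) = (lmax(A) + w) / w, which decreases in w
# and approaches 1 as w grows, while cond(A) itself is effectively infinite
```

This is exactly the tradeoff exploited by SOLVER(): the shifted systems are cheap for CG even though the unshifted system is numerically unsolvable.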

Finally, one obtains the split Bregman algorithms for the RLO and AA2 models. In Algorithms 7 and 8, maxit denotes the maximum number of outer iterations, $\|\cdot\|_2$ stands for the $\ell_2$-norm, and tol denotes the tolerance of the stopping criterion for the test problems used in this paper. Notice that all images are assumed to have an intensity range that guarantees convergence of the binomial series used in the split Bregman algorithm.

1: and
2: for   to   do
3:Compute using SOLVER(1)
4:Compute using Equation (100)
5:Compute using Equation (101)
6:if    then
7:Stop
8:end if
9: end for
1: and
2: for   to   do
3:Compute using SOLVER(2)
4:Compute using Equation (100)
5:Compute using Equation (101)
6:if    then
7:Stop
8:end if
9: end for

6. Numerical Experiments

In this section, we provide numerical performance results of the TM, split Bregman, RHPM(2), and RHPM(1) methods for multiplicative noise removal problems. Numerical tests are performed using MATLAB R2016a on a personal computer with a 3.6 GHz CPU and 8 GB RAM. The multiplicative noisy images are generated by multiplying the clean image elementwise by Gaussian white noise or Gamma noise using the built-in MATLAB function randn or randg. More specifically, randn generates Gaussian white noise with the prescribed mean and variance, randg generates Gamma noise with mean one and the prescribed variance, and the noisy image is the componentwise product of the clean image and the noise.
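The noisy-image generation just described can be reproduced outside MATLAB as follows. This is a sketch: NumPy's generators replace randn/randg, and the Gamma shape parameter `K` is an assumed parameterization chosen so the noise has mean 1 and variance $\sigma^2 = 1/K$.

```python
import numpy as np

def multiply_gaussian_noise(u, sigma2, rng):
    """f = u * (1 + n), n ~ N(0, sigma2): multiplicative Gaussian white
    noise with mean 1 and variance sigma2."""
    n = np.sqrt(sigma2) * rng.standard_normal(u.shape)
    return u * (1.0 + n)

def multiply_gamma_noise(u, sigma2, rng):
    """f = u * g, g ~ Gamma(shape=K, scale=1/K) with K = 1/sigma2:
    multiplicative Gamma noise with mean 1 and variance sigma2."""
    K = 1.0 / sigma2
    g = rng.gamma(shape=K, scale=1.0 / K, size=u.shape)
    return u * g

rng = np.random.default_rng(0)
u = np.full((64, 64), 100.0)          # constant test image
f = multiply_gamma_noise(u, 0.03, rng)
fg = multiply_gaussian_noise(u, 0.01, rng)
```

On a constant image the sample mean of the noisy image stays near the clean intensity and its relative variance stays near the chosen noise variance, which is a quick sanity check on the mean-one property.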

In order to illustrate the efficiency and reliability of the proposed denoising algorithms (i.e., Algorithms 1–8), we provide numerical results for four test images: Caribou, Boat, Jet Plane, and Lake. The pixel size of the Caribou image differs from that of the other three test images. To evaluate the quality of the restored images, we use the peak signal-to-noise ratio (PSNR) between the restored image and the original image, defined by $\mathrm{PSNR} = 10 \log_{10} \left( MN \max_{i,j} |u_{i,j}|^2 / \|u - \tilde{u}\|_F^2 \right)$ dB, where $\|\cdot\|_F$ refers to the Frobenius norm, $u$ and $\tilde{u}$ are the original and restored images, respectively, $u_{i,j}$ stands for the value of the original image at pixel $(i, j)$, and $MN$ is the total number of pixels. In general, a larger PSNR indicates better quality of the denoised image.
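PSNR can be computed directly as follows. This is a sketch assuming the conventional fixed peak value of 255 for 8-bit images; the paper's definition uses the maximum of the original image, so the `peak` argument is an assumption.

```python
import numpy as np

def psnr(u, u_restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between the original image u and
    the restored image; `peak` is the assumed maximum intensity."""
    mse = np.mean((np.asarray(u, float) - np.asarray(u_restored, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

u = np.zeros((8, 8))
v = np.full((8, 8), 16.0)    # constant error of 16, so mse = 256
val = psnr(u, v)             # 10 * log10(255^2 / 256) ≈ 24.05 dB
```

Because PSNR is a monotone transform of the mean squared error, ranking methods by PSNR is equivalent to ranking them by MSE.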

For the numerical experiments, the test images are scaled to suitable intensity ranges for the TM, split Bregman, and RHPM methods, and the noise is Gaussian white noise or Gamma noise with mean 1 and one of two variance levels. For all test problems, the initial image was set to the observed noisy image $f$, and fixed weight parameters were used for the RLO and AA2 models. For the TM and RHPM methods (i.e., Algorithms 1–6), the iteration was terminated when MAE($k$), defined in Section 2, started to increase. For the split Bregman method (i.e., Algorithms 7 and 8), the iteration was terminated when the prescribed stopping criterion was satisfied for all test problems.

Tables 1 and 2 provide numerical results for the TM, split Bregman, RHPM(1), and RHPM(2) methods corresponding to the AA2 model with multiplicative Gamma noise and the RLO model with multiplicative Gaussian white noise, respectively. In Tables 1 and 2, the first PSNR column reports the PSNR values of the noisy images, Psnr reports the PSNR values of the restored images, the parameters are chosen as the best ones found by trial and error, Iter denotes the number of iterations, and Time denotes the elapsed CPU time in seconds.

Figures 1–4 show the images denoised by the TM, split Bregman, RHPM(1), and RHPM(2) methods for the AA2 model with multiplicative Gamma noise. Figures 5–8 show the images denoised by the TM, split Bregman, RHPM(1), and RHPM(2) methods for the RLO model with multiplicative Gaussian white noise. The first-row images are the original image, the noisy image, and the image denoised by the TM method. The second-row images are the images denoised by the split Bregman, RHPM(1), and RHPM(2) methods.

As can be seen in Tables 1 and 2, the split Bregman method denoises best at one variance level, while RHPM(2) denoises best at the other; that is, each attains the highest PSNR values in the corresponding case. Observe that RHPM(1) computes 2 vectors every iteration, while RHPM(2) computes 3 vectors every iteration. This means that the computational cost of RHPM(2) per iteration is more expensive than that of RHPM(1). However, the convergence rate of RHPM(2) is much faster than that of RHPM(1), so the total CPU time of RHPM(2) is much less than that of RHPM(1). In addition, RHPM(2) denoises much better than RHPM(1), and RHPM(2) has the smallest CPU time of all the methods considered in this paper. Overall, the split Bregman and RHPM(2) methods perform better than the RHPM(1) and TM methods. The split Bregman method denoises almost as well as RHPM(2) in most cases, while it denoises significantly better than RHPM(2) for the Jet Plane image with variance 0.03. In this regard, the split Bregman method performs more stably than RHPM(2).

7. Conclusion

In this paper, we have proposed restarted homotopy perturbation methods (RHPM) and split Bregman methods for multiplicative noise removal of the RLO and AA2 models. For the RHPM method, we have used binomial series techniques to settle the main difficulty in handling nonlinear terms. For the split Bregman method, we have used splitting techniques of singular matrices to handle the difficulty in solving ill-conditioned linear systems. Numerical experiments have shown that these techniques work well for all test problems. More specifically, the split Bregman and RHPM methods perform better than the TM method.

Numerical experiments also showed that the split Bregman and RHPM(2) methods perform better than RHPM(1). In addition, the split Bregman method denoises almost as well as RHPM(2) in most cases, and it performs more stably than RHPM(2) for noisy images with large variance. Notice that RHPM(2) has the smallest execution time because of its fast convergence rate. Based on these facts, the RHPM(2) and split Bregman methods are preferred over RHPM(1), and the split Bregman method is preferred over RHPM(2) for noisy images with large variance. The proposed RHPM and split Bregman methods were compared only with the TM method, so future work will include comparison studies between the proposed methods and other existing methods for multiplicative noise removal models.

Data Availability

All the data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2016R1D1A1A09917364).