Abstract

To overcome the staircasing effects introduced by TV regularization in image restoration, this paper investigates a fourth-order partial differential equation (PDE) filter for removing Poisson noise. In view of the slow convergence of the classical gradient descent method, we adopt an alternating minimization algorithm to realize this scheme. Compared with the corresponding total-variation-based model, numerical simulations clearly indicate the superiority of the proposed strategy in handling smooth regions of Poissonian images and in improving the computational speed.

1. Introduction

Noise removal occupies an important position in image processing. The classical total variation (TV) regularization model proposed by Rudin et al. [1] can be characterized as
$$ \min_{u \in BV(\Omega)} \int_\Omega |Du| + \frac{\lambda}{2}\int_\Omega (u - f)^2\,dx, \qquad (1.1) $$
where $\Omega \subset \mathbb{R}^2$ stands for a spatial domain with Lipschitz boundary, $u$ and $f$ are the clean image and the initial noisy image, respectively, $\lambda > 0$ is a regularization parameter, and $Du$ represents the distributional derivative of $u$. This strategy does a good job of preserving sharp edges while removing additive Gaussian noise. However, in the real world, many important images, for instance, in medical imaging [2, 3] and astronomy [4], are contaminated by Poisson noise more frequently than by additive Gaussian noise. Although the ROF model (1.1) is effective for removing Gaussian noise, it is inferior in processing Poissonian images, whose noise is signal dependent. To remove Poisson noise effectively, Le et al. [5] introduced the following TV regularization model
$$ \min_{u \in BV(\Omega)} \int_\Omega |Du| + \lambda \int_\Omega (u - f\log u)\,dx, \qquad (1.2) $$
where $u$ must be positive almost everywhere. Numerical experiments demonstrate the better performance of scheme (1.2) for Poisson noise removal than model (1.1).

Although model (1.1) and model (1.2) have been shown to achieve a good tradeoff between noise removal and edge preservation, unfortunately, the unpleasant blocky effect inevitably emerges in the homogeneous flat regions of restored images owing to the use of second-order schemes. One of the most well-known solutions to overcome this drawback is to use high-order partial differential equations instead of second-order ones. To date, high-order diffusion strategies have made great progress in effectively suppressing Gaussian noise, such as the schemes in [6–15]. Among these, the model proposed by Lysaker et al. [10] for Gaussian noise removal has received particular attention; it can be formulated as
$$ \min_{u} \int_\Omega |D^2 u| + \frac{\lambda}{2}\int_\Omega (u - f)^2\,dx, \qquad (1.3) $$
where $|D^2 u|$ denotes the Frobenius norm of the Hessian of $u$. Numerical experiments clearly exhibit the superiority of the model mentioned above over its second-order counterpart in relieving staircase artifacts.

Motivated by the above models (1.2) and (1.3), we propose the following fourth-order PDE regularization scheme to suppress Poisson noise:
$$ \min_{u} \int_\Omega |D^2 u| + \lambda \int_\Omega (u - f\log u)\,dx, \qquad (1.4) $$
where $u$ is required to be positive almost everywhere. In consideration of the slow convergence speed of the traditional implicit steepest descent method, we put forward the alternating minimization algorithm for obtaining its numerical solution.

In this paper, we make two principal contributions. First, a fourth-order regularization strategy for restoring Poissonian images is proposed, which substantially improves the quality of the reconstructed images, especially in relieving staircase artifacts. Second, we introduce the alternating minimization (AM) algorithm for solving the proposed fourth-order scheme (1.4).

The remainder of this paper is organized as follows. Section 2 establishes the existence and uniqueness of the solution of the proposed model (1.4). Section 3 focuses on the numerical algorithm for solving the proposed strategy. The convergence analysis is given in Section 4. Numerical experiments indicating the advantage of our approach in avoiding blocky effects and enhancing convergence speed, compared with the classical TV-regularization-based one, are presented in Section 5. Finally, we conclude in Section 6.

2. Existence and Uniqueness

In this section, the existence and uniqueness of the minimizer of model (1.4) are proved. Some basic notations and properties concerning the space $BV^2(\Omega)$ of functions with bounded Hessian can be found in [8, 13]. Motivated by [5], we have the following existence and uniqueness result for the optimization problem (1.4).

Theorem 2.1. Assume $\Omega$ is an open, bounded Lipschitz domain in $\mathbb{R}^2$ and $f$ is a positive bounded function. Let $f \in L^\infty(\Omega)$ be such that $\inf_\Omega f > 0$. Then there exists a unique minimizer of model (1.4) in $BV^2(\Omega)$.

Proof. Obviously, the objective functional of (1.4), denoted by $E$, is bounded from below. Hence there exists a minimizing sequence $\{u_n\} \subset BV^2(\Omega)$. Using Jensen's inequality, we conclude that $\|u_n\|_{L^1(\Omega)}$ is bounded. This, together with the boundedness of $\int_\Omega |D^2 u_n|$, yields that $\{u_n\}$ is bounded in $BV^2(\Omega)$. Then, by the compactness theorem in $BV^2(\Omega)$, there exists a function $u \in BV^2(\Omega)$ such that a subsequence of $\{u_n\}$, still denoted by $\{u_n\}$, converges to $u$ a.e. in $\Omega$. In addition, by the lower semicontinuity of the $BV^2$ seminorm, we have $\int_\Omega |D^2 u| \le \liminf_{n\to\infty} \int_\Omega |D^2 u_n|$. Combining this with Fatou's lemma, we obtain $E(u) \le \liminf_{n\to\infty} E(u_n)$, which ensures that $u$ is a minimizer of (1.4). Moreover, since the optimization problem (1.4) is strictly convex, the minimizer $u$ is unique by convex analysis [16]. This completes the proof.

3. Computational Method

This section focuses on minimizing the energy functional of the model (1.4). Generally speaking, minimizing (1.4) can be achieved by the traditional gradient descent method [1, 17], which amounts to solving its formal Euler-Lagrange equation in $\Omega$:
$$ \sum_{i,j=1}^{2} \partial_{x_j}\partial_{x_i}\!\left(\frac{u_{x_i x_j}}{|D^2 u|}\right) + \lambda\left(1 - \frac{f}{u}\right) = 0, \qquad (3.1) $$
where $|D^2 u| = \big(\sum_{i,j=1}^{2} u_{x_i x_j}^2\big)^{1/2}$. Introducing an auxiliary time variable $t$, we derive the associated heat flow
$$ \frac{\partial u}{\partial t} = -\sum_{i,j=1}^{2} \partial_{x_j}\partial_{x_i}\!\left(\frac{u_{x_i x_j}}{|D^2 u|}\right) - \lambda\left(1 - \frac{f}{u}\right). \qquad (3.2) $$
Here, we omit the boundary conditions, which can be deduced easily. Finite difference approaches [1, 18–20] can be chosen to solve problem (3.2). As is well known, $|D^2 u|$ is usually replaced by $\sqrt{|D^2 u|^2 + \epsilon}$ because of the nondifferentiability of $|D^2 u|$ at zero, where $\epsilon$ is a small positive parameter. The tradeoff in choosing the smoothing parameter $\epsilon$ is the reconstruction error versus the convergence speed. Therefore, exploring fast numerical methods is an important task. In the last few years, many effective algorithms have been developed for problems of this type. However, a great many current algorithms, for instance, the dual algorithm [20–22] and split Bregman iteration [23], cannot be directly applied to problem (1.4) because of its nonquadratic and nonsmooth higher-order properties.
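For concreteness, an explicit time discretization that a finite difference implementation of the smoothed flow (3.2) would typically use reads as follows (a sketch; the time step $\Delta t$ and the placement of the smoothing parameter $\epsilon$ follow common practice and are not prescribed here):
$$ u^{n+1} = u^{n} - \Delta t\left[\sum_{i,j=1}^{2}\partial_{x_j}\partial_{x_i}\!\left(\frac{u^{n}_{x_i x_j}}{\sqrt{|D^{2}u^{n}|^{2} + \epsilon}}\right) + \lambda\left(1 - \frac{f}{u^{n}}\right)\right]. $$
A large $\epsilon$ speeds up convergence but biases the reconstruction, while a small $\epsilon$ forces a small $\Delta t$ and many iterations, which is precisely the tradeoff mentioned above.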

Motivated by [24–26], we adopt the alternating minimization (AM) algorithm to acquire the optimal solution of the scheme (1.4). Taking the numerical range of the image variable in (1.4) into consideration, we rescale the variable accordingly. Thus, instead of model (1.4), we consider the following minimization problem:
$$ \min_{u > 0} \int_\Omega |D^2 u| + \lambda \int_\Omega (u - f\log u)\,dx. \qquad (3.3) $$
Owing to the computational difficulties brought by the nondifferentiable term $\int_\Omega |D^2 u|$, we introduce an auxiliary variable $v$ and add a quadratic fitting term to the model (3.3). Then the following approximation scheme for problem (3.3) is obtained:
$$ \min_{u > 0,\,v} \int_\Omega |D^2 v| + \frac{\theta}{2}\int_\Omega (u - v)^2\,dx + \lambda\int_\Omega (u - f\log u)\,dx. \qquad (3.4) $$
Here, $\lambda$ and $\theta$ are two positive parameters. Usually, $\theta$ is larger than $\lambda$, since a relatively small $\theta$ will result in inferior restoration quality, while too large a value of $\theta$ makes (3.4) extremely difficult to solve numerically. Notice that the objective function of problem (3.4) is strictly convex and differentiable with respect to the first variable $u$; therefore, (3.4) can be decomposed into two decoupled subproblems, namely,
$$ u^{k+1} = \arg\min_{u>0}\ \lambda\int_\Omega (u - f\log u)\,dx + \frac{\theta}{2}\int_\Omega (u - v^{k})^2\,dx, $$
$$ v^{k+1} = \arg\min_{v}\ \int_\Omega |D^2 v| + \frac{\theta}{2}\int_\Omega (u^{k+1} - v)^2\,dx. \qquad (3.5) $$

For the $u$-subproblem, the objective function is strictly convex, so the solution is unique. In view of the differentiability of the objective function with respect to $u$, determining the solution of the first equation of the system (3.5) amounts to solving
$$ \lambda\left(1 - \frac{f}{u}\right) + \theta\,(u - v^{k}) = 0. \qquad (3.6) $$
Obviously, the Newton iteration method is a good choice for efficiently solving this kind of pointwise equation.
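As an illustration, the following NumPy sketch applies a pointwise Newton iteration to the optimality condition (3.6); the function name, the fixed iteration count, and the choice of initial guess are our assumptions rather than settings prescribed in the paper.

```python
import numpy as np

def solve_u_subproblem(f, v, lam, theta, iters=10):
    """Pointwise Newton iteration for lam*(1 - f/u) + theta*(u - v) = 0, u > 0."""
    u = np.maximum(f, 1e-3)                             # positive initial guess (assumption)
    for _ in range(iters):
        g = lam * (1.0 - f / u) + theta * (u - v)       # residual of (3.6)
        dg = lam * f / (u ** 2) + theta                 # derivative, strictly positive
        u = u - g / dg                                  # Newton update
        u = np.maximum(u, 1e-8)                         # keep the iterate positive
    return u
```

Since (3.6) is equivalent, at each pixel, to the quadratic equation $\theta u^{2} + (\lambda - \theta v^{k})u - \lambda f = 0$, the positive root could also be taken in closed form; the Newton iteration is shown here because it is the solver named in the text.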

As for the second subproblem, we make use of the dual algorithm, which was first introduced by Chen et al. [20] for TV regularization and then was extended to the fourth-order situation for solving model (1.3) [21]. The dual formulas for the $v$-subproblem in the discrete setting can be briefly depicted as
$$ p^{n+1} = \frac{p^{n} + \tau\,\nabla^{2}\big(\operatorname{div}^{2} p^{n} - \theta u^{k+1}\big)}{1 + \tau\,\big|\nabla^{2}\big(\operatorname{div}^{2} p^{n} - \theta u^{k+1}\big)\big|}, $$
where $p$ is a matrix-valued dual variable, $\nabla^{2}$ denotes the discrete Hessian, and $\operatorname{div}^{2}$ denotes its adjoint (the second-order divergence). Subsequently, for all time steps $\tau$ with $0 < \tau \le 1/64$, as $n \to \infty$, we get the minimizer of the $v$-subproblem
$$ v^{k+1} = u^{k+1} - \frac{1}{\theta}\operatorname{div}^{2} p. $$
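To make the structure of this dual iteration concrete, here is a small NumPy sketch under periodic boundary conditions (an assumption made only so that the discrete adjoint is exact); the function names, the fixed iteration count, and the step size $\tau = 1/64$ are illustrative choices rather than the settings used in the paper.

```python
import numpy as np

def dx(v):  return np.roll(v, -1, axis=1) - v          # forward difference in x
def dy(v):  return np.roll(v, -1, axis=0) - v          # forward difference in y
def dxT(p): return np.roll(p, 1, axis=1) - p           # adjoint of dx (periodic)
def dyT(p): return np.roll(p, 1, axis=0) - p           # adjoint of dy (periodic)

def hessian(v):
    """Discrete Hessian: (v_xx, v_yx, v_xy, v_yy)."""
    return np.stack([dx(dx(v)), dy(dx(v)), dx(dy(v)), dy(dy(v))])

def div2(p):
    """Adjoint of the discrete Hessian (second-order divergence)."""
    return dxT(dxT(p[0])) + dxT(dyT(p[1])) + dyT(dxT(p[2])) + dyT(dyT(p[3]))

def solve_v_subproblem(u, theta, tau=1.0/64.0, iters=100):
    """Chambolle-type dual iteration for min_v |D^2 v| + (theta/2)*||u - v||^2."""
    p = np.zeros((4,) + u.shape)
    for _ in range(iters):
        w = hessian(div2(p) - theta * u)
        norm = np.sqrt(np.sum(w ** 2, axis=0))          # pointwise Frobenius norm
        p = (p + tau * w) / (1.0 + tau * norm)
    return u - div2(p) / theta
```

The bound $\tau \le 1/64$ mirrors the classical step-size condition for the fourth-order dual iteration, since the norm of the discrete Hessian operator built from unit-step differences is bounded by 8.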

Then, given an initial value $v^{0}$, based on the above-mentioned AM approach, we obtain the sequence
$$ v^{0} \longrightarrow u^{1} \longrightarrow v^{1} \longrightarrow u^{2} \longrightarrow \cdots \longrightarrow u^{k} \longrightarrow v^{k} \longrightarrow \cdots. $$
Thanks to the low computational cost of both subproblems in (3.5), the AM approach has an advantage in convergence speed, so that the computation time is reduced to a great extent.
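Putting the two subproblem solvers together, the loop sketched below mirrors the sequence above; it reuses the hypothetical helpers solve_u_subproblem and solve_v_subproblem from the previous sketches, and the initialization from the observed image, the iteration count, and the parameter values are placeholders, not the settings used in the experiments.

```python
import numpy as np

def am_poisson_denoise(f, lam=1.0, theta=10.0, outer_iters=50):
    """Alternating minimization for the fourth-order Poisson denoising model (3.4)."""
    v = f.astype(float)                               # placeholder initialization
    u = v
    for _ in range(outer_iters):
        u = solve_u_subproblem(f, v, lam, theta)      # Newton step, equation (3.6)
        v = solve_v_subproblem(u, theta)              # dual step for the |D^2 v| term
    return u
```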

4. Convergence Analysis

Let $F(u) = \lambda\int_\Omega (u - f\log u)\,dx$, $G(v) = \int_\Omega |D^2 v|$, and define
$$ T_1(v) = \arg\min_{u}\Big\{F(u) + \frac{\theta}{2}\|u - v\|_2^2\Big\}, \qquad T_2(u) = \arg\min_{v}\Big\{G(v) + \frac{\theta}{2}\|u - v\|_2^2\Big\}. $$
Then we have $v^{k+1} = T_2(T_1(v^{k})) = T(v^{k})$ with $T = T_2 \circ T_1$. By the Opial theorem [27], we first need to show that the operator $T$ is nonexpansive and asymptotically regular. Recall that for a proper, convex, lower semicontinuous function $\varphi$, the operator $\operatorname{prox}_{\varphi}(x) = \arg\min_{y}\{\varphi(y) + \frac{1}{2}\|y - x\|_2^2\}$ is called the proximal operator of $\varphi$. According to this definition, it is easy to check that the operators $T_1$ and $T_2$ are the proximal operators of $F/\theta$ and $G/\theta$, respectively. Combining this with Lemma 2.4 in [28], we obtain that both $T_1$ and $T_2$ are nonexpansive. Then $T$ is nonexpansive.
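The identification of $T_1$ and $T_2$ as proximal operators amounts to dividing the objectives by $\theta$, which does not change the minimizers; explicitly, in the notation introduced above,
$$ T_1(v) = \arg\min_{u}\Big\{\tfrac{1}{\theta}F(u) + \tfrac{1}{2}\|u - v\|_2^2\Big\} = \operatorname{prox}_{F/\theta}(v), \qquad T_2(u) = \arg\min_{v}\Big\{\tfrac{1}{\theta}G(v) + \tfrac{1}{2}\|u - v\|_2^2\Big\} = \operatorname{prox}_{G/\theta}(u). $$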

Lemma 4.1. The operator is asymptotically regular.

Proof. Take an arbitrary initial guess $v^{0}$. To show that $T$ is asymptotically regular, by the definition we have to show that $\|T^{k+1}v^{0} - T^{k}v^{0}\|$ tends to zero as $k \to \infty$. Let $u^{k+1} = T_1(v^{k})$ and $v^{k+1} = T_2(u^{k+1})$, so that $v^{k+1} = T(v^{k})$. Expanding the objective functional of (3.4) in its second variable by Taylor's formula and using the optimality of $v^{k+1}$ for the $v$-subproblem, together with the convexity of the objective functional, we deduce that the objective value decreases by at least a fixed multiple of $\|v^{k+1} - v^{k}\|^{2}$ at each iteration. Notice also that the objective value does not increase in the $u$-update, as $u^{k+1}$ is the minimizer of the $u$-subproblem. Since the objective functional is bounded from below, the series $\sum_{k}\|v^{k+1} - v^{k}\|^{2}$ converges. Therefore, $\|v^{k+1} - v^{k}\| = \|T^{k+1}v^{0} - T^{k}v^{0}\|$ tends to zero as $k \to \infty$, which guarantees that $T$ is asymptotically regular.

Lemma 4.2. The set of fixed points of $T$ is nonempty.

Proof. First of all, similar to [25], we can derive that the objective functional of (3.4) is coercive. Moreover, as it is also closed and proper, by the Weierstrass theorem [16], the set of minimizers of (3.4) is nonempty. Suppose $(\hat{u}, \hat{v})$ is a minimizer of (3.4); then $\hat{u} = T_1(\hat{v})$ and $\hat{v} = T_2(\hat{u})$. It follows that $T(\hat{v}) = T_2(T_1(\hat{v})) = T_2(\hat{u}) = \hat{v}$. Therefore, $\hat{v}$ is a fixed point of $T$.

Then, by the Opial theorem [27], we conclude that $\{v^{k}\}$ converges to a fixed point of $T$. As the objective functional of (3.4) is differentiable with respect to $u$ and strictly convex, its unique minimizer corresponds exactly to the fixed point of $T$. Hence, the sequence generated by the AM algorithm converges to the unique minimizer of (3.4).

5. Numerical Experiments

In this section, we illustrate the efficiency and superiority of the proposed scheme. We compare the TV-regularization-based model (1.2) solved by the gradient descent method with the fourth-order model (1.4) solved by the gradient descent method and with the fourth-order scheme (3.4) solved by the alternating minimization method, that is, Algorithm 1, in terms of convergence speed and image restoration quality. Let GD-(1.2), GD-(1.4), and AM-(3.4) denote the above three schemes, respectively. The performance of the three schemes is tested on three images: "lenna", "pepper", and "panda". All of our experiments are run under Windows XP with MATLAB R2009a. The Poisson noise is added using the MATLAB library function imnoise(I, 'poisson'). It is worth mentioning that the Poisson noise magnitude in an image increases with the intensity of the corresponding image region, which is different from Gaussian noise, whose strength is signal independent. The same time step is chosen for all the following experiments.
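For readers reproducing the setup outside MATLAB, the following NumPy sketch mimics the effect of imnoise(I, 'poisson') on an 8-bit image; the treatment of integer intensities as Poisson means is our assumption about typical behavior, not a specification of MATLAB's internals.

```python
import numpy as np

def add_poisson_noise(img_uint8, seed=0):
    """Draw each pixel from a Poisson distribution whose mean is the clean intensity,
    so brighter regions receive stronger noise (signal dependent, unlike Gaussian noise)."""
    rng = np.random.default_rng(seed)
    noisy = rng.poisson(img_uint8.astype(float))
    return np.clip(noisy, 0, 255).astype(np.uint8)
```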

For assessing the quality of recovered images, the relative error (ReErr) [25] is adopted:
$$ \mathrm{ReErr} = \frac{\|\hat{u} - u^{*}\|_2^2}{\|u^{*}\|_2^2}, $$
where $\hat{u}$ and $u^{*}$ represent the reconstructed image and the original image, respectively. Generally speaking, the smaller the ReErr value, the better the quality of the restored image.
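A direct implementation of this metric, matching the definition above (the variable names are ours):

```python
import numpy as np

def relative_error(restored, original):
    """Relative error between the restored image and the clean reference."""
    r = np.asarray(restored, dtype=float)
    o = np.asarray(original, dtype=float)
    return np.sum((r - o) ** 2) / np.sum(o ** 2)
```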

Example 5.1. The first test image "lenna" (Figure 1(a)) is a grayscale image. The corresponding degraded image corrupted by Poisson noise is shown in Figure 1(b). Figures 1(c), 1(d), and 1(e) show the restored images obtained after 460 iterations of GD-(1.2), 280 iterations of GD-(1.4), and 45 iterations of AM-(3.4), respectively. Their corresponding enlarged partial images are displayed in Figures 2(a)–2(e). Table 1 lists the number of iterations, the computational time, and the ReErr value of the three methods for this experiment. From Table 1, we can see that the AM-(3.4) scheme achieves a lower ReErr value in less time than GD-(1.2) and GD-(1.4). Comparing GD-(1.2) with GD-(1.4), although GD-(1.2) costs less time per iteration, GD-(1.4) is slightly faster overall. In terms of the restoration results, the images processed by the fourth-order approaches GD-(1.4) (see Figure 2(d)) and AM-(3.4) (see Figure 2(e)) are similar, but the one processed by the second-order approach GD-(1.2) (see Figure 2(c)) suffers from unpleasant staircasing effects, just as the ROF model does for Gaussian noise removal. It turns out that the fourth-order approaches successfully overcome the piecewise-constant behavior of the solution and obtain more natural recovered results than the second-order one. The parameters with respect to the above test are .

Example 5.2. To further evaluate the performance of the proposed fourth-order scheme with the AM method, we test the three methods on the image "pepper" (Figure 3(b)). The corresponding parameters used in this experiment are set as follows: . From Figure 5(c), we can see that the staircasing effects still appear in the homogeneous flat regions of the image restored by the second-order scheme (1.2). Figures 5(d) and 5(e) show the results processed by the fourth-order PDEs, which are closer to the original image. The number of iterations, the computational time, and the ReErr value of the three methods for this experiment are listed in Table 2, and the relation between ReErr and the number of iterations is illustrated in Figure 4. Figure 4 indicates that the AM algorithm for (3.4) converges so fast that 50 iterations are enough to obtain a relatively ideal result, whereas the numbers of iterations needed by GD-(1.2) (see Figure 5(a)) and GD-(1.4) (see Figure 5(b)) are about 700 and 350, respectively. As a result, the proposed scheme, that is, AM-(3.4), can obtain higher-quality restored images in less time, especially for larger images.

Example 5.3. We use the image "panda" (Figure 6(a)) as our third test image. Unlike "lenna" and "pepper", the image "panda" contains more fine details. This experiment shows that the proposed scheme also performs well on images with small details. The degraded "panda" corrupted by Poisson noise is displayed in Figure 6(b). The parameters are chosen to be . Table 3 lists the number of iterations, the computational time, and the ReErr value of the three methods for this experiment. The relation between the ReErr value and the number of iterations can be seen in Figure 7. From Table 3 and Figure 7, we can see that the fourth-order schemes achieve higher-quality images, as evaluated by lower ReErr values, than the second-order one. Moreover, the proposed AM-(3.4) outperforms the other two both in reducing the computational time and in image restoration quality.

6. Conclusion

This paper investigates a fourth-order regularization-based scheme for restoring images corrupted by Poisson noise. The existence and uniqueness of the minimizer of the proposed model are presented in detail. To quickly obtain the optimal solution of the proposed scheme, the alternating minimization method is employed. Compared with the results recovered by the TV-regularization-based model, simulation results evidently demonstrate the superiority of the proposed approach, especially in overcoming staircase effects and improving the computational speed while removing Poisson noise.

Algorithm 1 (the AM method for solving (3.4)).
Initialization: choose an initial value $v^{0}$ and set $k = 0$;
For $k = 0, 1, 2, \ldots$ until a stopping criterion is met, Do
 Compute $u^{k+1}$ according to the first equation of (3.5) for fixed $v^{k}$.
 Compute $v^{k+1}$ according to the second equation of (3.5) for fixed $u^{k+1}$.
End
 Output the restored image $u^{*} = u^{k+1}$.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11071061) and the National Basic Research Program of China (2010CB334706).