Mathematical Problems in Engineering

Volume 2019, Article ID 2502731, 11 pages

https://doi.org/10.1155/2019/2502731

## High-Order Total Bounded Variation Model and Its Fast Algorithm for Poissonian Image Restoration

^{1}Jiangxi Province Key Laboratory of Water Information Cooperative Sensing and Intelligent Processing, Nanchang Institute of Technology, Nanchang 330099, Jiangxi, China

^{2}College of Science, Nanchang Institute of Technology, Nanchang 330099, Jiangxi, China

Correspondence should be addressed to Chengzhi Deng; dengcz@nit.edu.cn

Received 12 November 2018; Revised 19 January 2019; Accepted 30 January 2019; Published 11 February 2019

Academic Editor: Weimin Han

Copyright © 2019 Jun Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In this paper, a new model for image restoration under Poisson noise based on a high-order total bounded variation is proposed. Existence and uniqueness of its solution are proved. To find the global optimal solution of our strongly convex model, a split Bregman algorithm is introduced. Furthermore, a rigorous convergence theory of the proposed algorithm is established. Experimental results are provided to demonstrate the effectiveness and efficiency of the proposed method over the classic total bounded variation-based model.

#### 1. Introduction

Being signal-dependent, Poisson noise is inevitable in various applications such as positron emission tomography [1], electron microscopy [2], and astronomical imaging [3]. In particular, the degree of Poisson noise depends on the peak pixel intensity, in the sense that a smaller peak value yields a higher intensity of Poisson noise. Moreover, the Poisson noise magnitude in an image increases with the pixel intensity of each region of the image. Since Poisson noise differs from Gaussian noise, restoration models and algorithms designed for Gaussian white noise are not effective at removing it.
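The signal-dependence described above is easy to check numerically: a Poisson random variable has variance equal to its mean, so the relative noise level (standard deviation over mean) decays like $1/\sqrt{\text{intensity}}$. The following is an illustrative sketch, not code from the paper:

```python
import numpy as np

# Sketch: Poisson noise is signal-dependent -- its variance equals its mean,
# so the relative noise level (std/mean) shrinks as the pixel intensity grows.
rng = np.random.default_rng(0)

def relative_noise(intensity, n=200_000):
    """Empirical std/mean ratio of Poisson samples at a given mean intensity."""
    samples = rng.poisson(intensity, size=n)
    return samples.std() / samples.mean()

# A low-intensity (small peak value) pixel is relatively much noisier than a
# high-intensity one: std/mean is approximately 1/sqrt(intensity).
low, high = relative_noise(5.0), relative_noise(255.0)
```

This is why a small peak value (low photon count) yields visually severe noise, while the same Poisson process is barely noticeable at high intensities.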

A traditional method for Poisson denoising was proposed by Richardson and Lucy (RL) [4, 5]. However, the RL algorithm converges slowly, and noise remains in the restored images. A standard approach to removing the noise is to incorporate a regularization term, such as Tikhonov regularization [6], total variation (TV) regularization [7], or high-order regularization [8]. In particular, the TV-based Poisson denoising model [9] can be described as
$$\min_{u>0}\ \int_\Omega |\nabla u|\,dx + \lambda\int_\Omega \big(u - f\log u\big)\,dx,\tag{1}$$
where $f$ is the observed noisy image, $\Omega$ is a bounded open domain in $\mathbb{R}^2$, and $\lambda>0$ is a regularization parameter. For this model (1), a number of fast and efficient numerical methods have been proposed, for instance, the dual algorithm [10], the augmented Lagrangian method [11, 12], the split Bregman algorithm [13], the multilevel algorithm [14], and a semi-implicit gradient descent method [15]. These algorithms have also been successfully applied to restore images corrupted by Gaussian, impulse, and multiplicative noise [16–21]. In [11, 13], the following TV-based Poisson restoration model has also been studied:
$$\min_{u>0}\ \int_\Omega |\nabla u|\,dx + \lambda\int_\Omega \big(Ku - f\log Ku\big)\,dx,\tag{2}$$
where $K$ is a blurring operator. To solve this model, the authors of [11] proposed an augmented Lagrangian method called PIDAL. However, it needs to solve a TV denoising subproblem and thus requires an inner iteration loop. Furthermore, the nonnegativity of the iterates cannot be guaranteed. To deal with these problems, Setzer et al. [13] proposed a split Bregman algorithm called PIDSplit+. Subsequently, Wen et al. [22] proposed primal-dual algorithms for model (2). Their algorithms require less memory, do not need to solve any linear systems, and are thus fast. However, they did not provide a corresponding convergence theory for this Poisson restoration model.

Replacing the TV regularization term in (2) by the total bounded variation $\int_\Omega |\nabla u|\,dx + \frac{\mu}{2}\|u\|_{L^2}^2$ [23], Liu and Huang [24] proposed the total bounded variation-based Poisson restoration model as follows:
$$\min_{u>0}\ \int_\Omega |\nabla u|\,dx + \frac{\mu}{2}\|u\|_{L^2}^2 + \lambda\int_\Omega \big(Ku - f\log Ku\big)\,dx,\tag{3}$$
where $\lambda,\mu>0$ are parameters and $K$ is a bounded linear operator representing the convolution kernel. Owing to the strong convexity of the objective, uniqueness of the solution of this model is guaranteed. Further, the convergence of the proposed algorithm was proved in [24].

The TV-based restoration model performs very well at preserving edges while removing noise. However, it often causes staircase effects in flat regions. To overcome the staircase effects, some high-order models [25–28] have been introduced in image restoration. For restoring blurred images corrupted by Poisson noise, hybrid regularizations combining TV with high-order variation have also been considered [29, 30]. In this paper, we propose the following model by replacing the TV term in (3) with the second-order term $\int_\Omega |\nabla^2 u|\,dx$:
$$\min_{u\in BV^2(\Omega)}\ \int_\Omega |\nabla^2 u|\,dx + \frac{\mu}{2}\|u\|_{L^2}^2 + \lambda\int_\Omega \big(Ku - f\log Ku\big)\,dx,\tag{4}$$
where $BV^2(\Omega)$ is a Banach space; its definition and related properties were introduced in [17]. For this model, we prove existence and uniqueness of the solution. We use a split Bregman algorithm to solve the resulting minimization problem by introducing new auxiliary variables. Besides, we prove the convergence of the proposed algorithm. The experimental results show that our model (4) avoids undesirable staircase effects. Compared with the split Bregman algorithm for solving the total bounded variation-based Poisson restoration model (3), our method is faster.

The organization of this paper is as follows. In Section 2, we show existence and uniqueness of the solution of our proposed model (4). Section 3 presents a split Bregman algorithm for solving this model. We also study the convergence of the proposed algorithm in this section. In Section 4, experiments are carried out to show the efficiency of our proposed algorithm. We end the paper with some conclusions in Section 5.

#### 2. Existence and Uniqueness

In this section, we prove the existence and uniqueness of the solution of the proposed model (4).

Theorem 1. *Model (4) has a minimizer in $BV^2(\Omega)$. Furthermore, if $K$ is injective, the minimizer is unique.*

*Proof. *Denote the objective functional of model (4) by $E(u)$. For the given $f>0$, the function $t\mapsto t - f\log t$ is strictly convex and bounded below, so $E$ is proper and $\inf_u E(u)$ is finite. Hence we can choose a minimizing sequence $\{u_n\}$ for $E$. By using Hölder's inequality and Jensen's inequality, the fidelity term controls $\|u_n\|_{L^1}$, and thus $\{u_n\}$ is bounded in $L^1(\Omega)$. Meanwhile, considering that $\int_\Omega |\nabla^2 u_n|\,dx$ is bounded along the minimizing sequence, we know that $\{u_n\}$ is a bounded sequence in $BV^2(\Omega)$. By the compactness theory, there is $u^*\in BV^2(\Omega)$ such that a subsequence of $\{u_n\}$ converges to $u^*$ a.e. By the lower semicontinuity of the $BV^2$ seminorm and of the $L^2$ norm, and by Fatou's lemma applied to the fidelity term, we get
$$E(u^*)\le\liminf_{n\to\infty}E(u_n)=\inf_u E(u).$$
Thus $u^*$ is a minimizer of model (4).

Obviously, if $K$ is injective, the high-order total variation term and the fidelity term in (4) are convex. Since $\frac{\mu}{2}\|u\|_{L^2}^2$ is strongly convex for a given constant $\mu>0$, the objective functional is strongly convex. Therefore, its minimizer is unique.

#### 3. The Split Bregman Algorithm

We describe the split Bregman algorithm (SBA) for solving the proposed model (4), together with a convergence analysis. The SBA is an efficient method that was initially introduced by Goldstein and Osher to solve general $\ell_1$-regularized problems [31]. It has been successfully applied to minimization problems that involve nondifferentiable and high-order functionals [19, 24, 32].

##### 3.1. Algorithm Description

It is straightforward to see that the proposed model (4) is equivalent to the following constrained optimization problem:
$$\min_{u,d,z}\ \int_\Omega |d|\,dx + \frac{\mu}{2}\|u\|_{L^2}^2 + \lambda\int_\Omega \big(z - f\log z\big)\,dx \quad \text{s.t.}\quad d=\nabla^2 u,\ \ z=Ku.\tag{7}$$

To solve this problem, we convert it into an unconstrained problem by applying the penalty method:
$$\min_{u,d,z}\ \int_\Omega |d|\,dx + \frac{\mu}{2}\|u\|_{L^2}^2 + \lambda\int_\Omega \big(z - f\log z\big)\,dx + \frac{\theta_1}{2}\|d-\nabla^2 u\|_{L^2}^2 + \frac{\theta_2}{2}\|z-Ku\|_{L^2}^2,\tag{8}$$
where $\theta_1,\theta_2>0$ are two penalty parameters. Consequently, the SBA for solving (4) can be described as follows:
$$\begin{aligned}
u^{k+1}&=\arg\min_u\ \frac{\mu}{2}\|u\|_{L^2}^2+\frac{\theta_1}{2}\|d^k-\nabla^2 u-b_1^k\|_{L^2}^2+\frac{\theta_2}{2}\|z^k-Ku-b_2^k\|_{L^2}^2,\\
d^{k+1}&=\arg\min_d\ \int_\Omega |d|\,dx+\frac{\theta_1}{2}\|d-\nabla^2 u^{k+1}-b_1^k\|_{L^2}^2,\\
z^{k+1}&=\arg\min_z\ \lambda\int_\Omega \big(z-f\log z\big)\,dx+\frac{\theta_2}{2}\|z-Ku^{k+1}-b_2^k\|_{L^2}^2,\\
b_1^{k+1}&=b_1^k+\nabla^2 u^{k+1}-d^{k+1},\\
b_2^{k+1}&=b_2^k+Ku^{k+1}-z^{k+1}.
\end{aligned}\tag{9}$$

For the $u$-subproblem, the corresponding Euler–Lagrange equation is
$$\mu u+\theta_1(\nabla^2)^{*}\big(\nabla^2 u+b_1^k-d^k\big)+\theta_2 K^{*}\big(Ku+b_2^k-z^k\big)=0.\tag{10}$$
We assume periodic boundary conditions, and hence (10) can be solved efficiently by the FFT, i.e.,
$$u^{k+1}=\mathcal{F}^{-1}\!\left(\frac{\mathcal{F}\big(\theta_1(\nabla^2)^{*}(d^k-b_1^k)+\theta_2 K^{*}(z^k-b_2^k)\big)}{\mu+\theta_1\mathcal{F}\big((\nabla^2)^{*}\nabla^2\big)+\theta_2\mathcal{F}\big(K^{*}K\big)}\right),\tag{11}$$
where $(\nabla^2)^{*}$ and $K^{*}$ represent the adjoint operators of $\nabla^2$ and $K$, respectively, and $\mathcal{F}(\cdot)$ denotes the Fourier transform.
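The FFT step can be sketched as follows. This is a hedged illustration specialized to denoising ($K$ the identity) with the high-order operator stood in for by the periodic Laplacian kernel for brevity; parameter names follow our reconstruction, not the authors' code:

```python
import numpy as np

# Sketch of the FFT solve for the u-subproblem: under periodic boundary
# conditions the linear system (mu + theta1*L^T L + theta2*I) u = rhs is
# diagonalized by the 2-D DFT, so it costs two FFTs per iteration.
def solve_u_fft(rhs1, rhs2, mu, theta1, theta2):
    """Solve (mu + theta1*L^T L + theta2) u = theta1*L^T rhs1 + theta2*rhs2,
    where L is the periodic discrete Laplacian (a stand-in for the
    second-order operator) and K is taken to be the identity."""
    m, n = rhs2.shape
    lap = np.zeros((m, n))
    lap[0, 0] = -4.0
    lap[0, 1] = lap[0, -1] = lap[1, 0] = lap[-1, 0] = 1.0
    L = np.fft.fft2(lap)  # eigenvalues of the circulant Laplacian
    num = theta1 * np.conj(L) * np.fft.fft2(rhs1) + theta2 * np.fft.fft2(rhs2)
    den = mu + theta1 * np.abs(L) ** 2 + theta2
    return np.real(np.fft.ifft2(num / den))
```

For a general convolution kernel $K$, the scalar `theta2` in the denominator would be replaced by `theta2 * |F(K)|**2`, and `rhs2` would be multiplied by `conj(F(K))` in the Fourier domain.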

There are closed-form solutions for both the $d$- and $z$-subproblems. In particular, the solution of the $d$-subproblem is given by the shrinkage operator,
$$d^{k+1}=\operatorname{shrink}\!\big(\nabla^2 u^{k+1}+b_1^k,\,1/\theta_1\big),\qquad \operatorname{shrink}(x,\gamma)=\frac{x}{|x|}\max\big(|x|-\gamma,0\big),\tag{12}$$
and the solution of the $z$-subproblem is obtained by using the quadratic formula, i.e.,
$$z^{k+1}=\frac{\big(\theta_2 a^k-\lambda\big)+\sqrt{\big(\theta_2 a^k-\lambda\big)^2+4\theta_2\lambda f}}{2\theta_2},\qquad a^k=Ku^{k+1}+b_2^k.\tag{13}$$
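Both closed-form updates can be sketched in a few lines; variable names ($a$ for $Ku+b_2$, `lam`, `theta`) are our reconstruction:

```python
import numpy as np

# Hedged sketch of the two closed-form updates: soft shrinkage for the
# d-subproblem and the positive root of a quadratic for the z-subproblem.
def shrink(x, gamma):
    """Soft-threshold x toward zero by gamma (componentwise magnitude)."""
    mag = np.abs(x)
    return np.where(mag > gamma, (1 - gamma / np.maximum(mag, 1e-12)) * x, 0.0)

def z_update(a, f, lam, theta):
    """Minimize lam*(z - f*log z) + (theta/2)*(z - a)^2 over z > 0.
    Setting the derivative to zero gives theta*z^2 + (lam - theta*a)*z
    - lam*f = 0; the positive root is kept."""
    c = theta * a - lam
    return (c + np.sqrt(c ** 2 + 4 * theta * lam * f)) / (2 * theta)
```

Note that the discriminant is always nonnegative for $f \ge 0$, so the $z$-update automatically produces a nonnegative iterate, which is the property that plain penalty splittings of model (2) lack.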

We summarize the overall algorithm in Algorithm 2.

*Algorithm 2 (SBA for solving (4)). *

*Step 1*. Input $f$, $K$, and the parameters $\lambda$, $\mu$, $\theta_1$, $\theta_2$. Initialization: $u^0$, $d^0$, $z^0$, $b_1^0$, and $b_2^0$. Let $k=0$.

*Step 2*. Compute $u^{k+1}$ by (11).

*Step 3*. Compute $d^{k+1}$ by (12).

*Step 4*. Compute $z^{k+1}$ by (13).

*Step 5*. Update $b_1^{k+1}$ and $b_2^{k+1}$ by the last two equations of (9).

*Step 6*. Stop if the stopping criterion is satisfied. Otherwise, let $k:=k+1$ and go to Step 2.
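The whole of Algorithm 2 can be sketched end to end. The sketch below is specialized to denoising ($K$ the identity) and uses the periodic Laplacian as a stand-in for the second-order operator; all parameter values and names are illustrative assumptions, not the authors' settings:

```python
import numpy as np

# End-to-end sketch of the split Bregman loop (Steps 1-6), denoising case.
# f must be a float array; lam weights the Poisson fidelity, mu the L2 term,
# theta1/theta2 are the splitting parameters.
def sba_poisson_denoise(f, mu=0.01, lam=1.0, theta1=1.0, theta2=1.0, iters=50):
    m, n = f.shape
    lap = np.zeros((m, n)); lap[0, 0] = -4.0
    lap[0, 1] = lap[0, -1] = lap[1, 0] = lap[-1, 0] = 1.0
    L = np.fft.fft2(lap)                      # eigenvalues of the Laplacian
    den = mu + theta1 * np.abs(L) ** 2 + theta2
    apply_L = lambda v: np.real(np.fft.ifft2(L * np.fft.fft2(v)))
    apply_Lt = lambda v: np.real(np.fft.ifft2(np.conj(L) * np.fft.fft2(v)))
    u = f.copy(); d = np.zeros_like(f); z = f.copy()
    b1 = np.zeros_like(f); b2 = np.zeros_like(f)
    for _ in range(iters):
        # Step 2: u-subproblem via FFT, cf. (11)
        rhs = theta1 * apply_Lt(d - b1) + theta2 * (z - b2)
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / den))
        # Step 3: d-subproblem via shrinkage, cf. (12)
        w = apply_L(u) + b1
        mag = np.maximum(np.abs(w), 1e-12)
        d = np.where(mag > 1.0 / theta1, (1 - 1.0 / (theta1 * mag)) * w, 0.0)
        # Step 4: z-subproblem via the quadratic formula, cf. (13)
        a = u + b2
        c = theta2 * a - lam
        z = (c + np.sqrt(c ** 2 + 4 * theta2 * lam * f)) / (2 * theta2)
        # Step 5: Bregman variable updates
        b1 += apply_L(u) - d
        b2 += u - z
    return u
```

Each iteration costs a fixed number of FFTs plus pointwise operations, which is what makes the method fast compared with schemes that require an inner iteration loop.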

##### 3.2. Convergence Analysis

Since our model (4) is strictly convex, it has a unique solution, as guaranteed by Theorem 1. Below we give a rigorous proof that the proposed algorithm (Algorithm 2) converges to this unique solution. Note that the following analysis stems from [24, 33], where the analysis of the unconstrained split Bregman iteration was presented in detail.

Theorem 3. *Assume that $K$ is injective and $\theta_1,\theta_2>0$. Then the sequence $\{u^k\}$ generated by Algorithm 2 converges to the unique solution of model (4).*

*Proof. *Let $\{(u^k,d^k,z^k,b_1^k,b_2^k)\}$ be the sequence generated by Algorithm 2. It follows from the first-order optimality conditions at the $(k+1)$-th iteration that (14) holds. On the other hand, the Karush–Kuhn–Tucker (KKT) conditions for (4) read as (15). We further introduce suitable auxiliary variables so as to obtain the set of conditions (16). By comparing (14) and (16), we can conclude that the solution of (4) corresponds to a fixed point of (9). We then introduce new variables to represent the errors between the iterates and this fixed point. By subtracting each equation in (16) from the corresponding one in (14), we get (17). Then, for (17), taking the inner product of the first three equations with the corresponding error variables, and squaring both sides of the last two equations, we obtain (19). Obviously, the last two equations of (19) can be rewritten as (20). By combining (20) with the first three equations in (19), we have (21). Summing (21) bilaterally over the iteration index yields a telescoping identity, from which it is straightforward to obtain (23). Since the objective functional is convex and $\{u^k\}$ is a bounded sequence in $BV^2(\Omega)$, all terms in (23) are nonnegative. Letting $k\to\infty$, inequality (23) suggests several limit conclusions, which imply (26). For any convex function, the Bregman distance is nonnegative; this, together with the second equality of (26), implies that the error in the $d$-variable vanishes. Recall the Bregman updates in (9); by the fourth equality of (26), we obtain (29), and in the same way we obtain the analogous limit for the second Bregman variable. Combining this with (29) yields the convergence of $\{u^k\}$ to the fixed point. Finally, by this and equality (15), we conclude that $\{u^k\}$ converges to the unique solution of model (4).

#### 4. Numerical Experiments

We present experimental results to show the performance of our proposed method (4), referred to as BLLTSB, in comparison with the total bounded variation-based Poisson restoration model (3), referred to as BTVSB. All experiments were conducted in the MATLAB environment on a PC with a 3.4 GHz CPU and 16 GB RAM. To assess the restoration quality, we use the peak signal-to-noise ratio (PSNR) defined by
$$\mathrm{PSNR}=10\log_{10}\frac{255^2}{\frac{1}{MN}\sum_{i,j}\big(u_{i,j}-\tilde{u}_{i,j}\big)^2},$$
where $u_{i,j}$ and $\tilde{u}_{i,j}$ are the pixel values of the original and restored images, respectively, and $M\times N$ is the image size. We also use the structural similarity (SSIM) index [34] for quantitative comparison. The stopping criterion in all the experiments is that the relative change $\|u^{k+1}-u^k\|_2/\|u^k\|_2$ falls below a prescribed tolerance or the iteration count reaches 500.
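The PSNR measure can be sketched as follows (8-bit images with a peak of 255 assumed):

```python
import numpy as np

# Sketch of the PSNR quality measure: 10*log10(255^2 / MSE), where MSE is the
# mean squared error between the original u and the restored image u_tilde.
def psnr(u, u_tilde):
    mse = np.mean((np.asarray(u, float) - np.asarray(u_tilde, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

Larger PSNR values indicate a restoration closer to the original image; identical images have infinite PSNR, so the measure is only meaningful for distinct image pairs.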

We choose Lena, Train, and Barbara as the test images, as shown in Figure 1, and two types of blurring kernels, Gaussian blur and motion blur (we use the MATLAB commands *fspecial('gaussian', \*\*, \*\*)* and *fspecial('motion', 10, 45)* to generate the Gaussian blur and motion blur, respectively). The blurry and noisy images are simulated as follows. The original image is first convolved with the blur kernel and then corrupted by Poisson noise (the Poisson noise is generated by using the MATLAB function *poissrnd*). Since Poisson noise is data-dependent, the noise level of the observed images depends on the pixel intensity. To test different noise levels, we apply *poissrnd* to a scaled version of the blurred image, where the scaling factor controls the noise level (one of the tested values is 5).
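The degradation pipeline can be sketched in Python as follows. The Gaussian kernel stands in for *fspecial('gaussian', …)*, circular convolution matches the periodic boundary assumption, and the kernel size, width, and scaling factor `eta` below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

# Hedged sketch of the blur + Poisson degradation: circular convolution with
# a normalized Gaussian kernel, then Poisson sampling of the scaled image.
# Smaller eta means fewer counts per pixel, hence stronger Poisson noise.
rng = np.random.default_rng(0)

def gaussian_kernel(size=9, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()                       # normalized to sum to one

def degrade(u, kernel, eta=5.0):
    m, n = u.shape
    pad = np.zeros((m, n))
    ks = kernel.shape[0]
    pad[:ks, :ks] = kernel
    pad = np.roll(pad, (-(ks // 2), -(ks // 2)), axis=(0, 1))  # center at (0,0)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(pad) * np.fft.fft2(u)))
    return rng.poisson(np.maximum(eta * blurred, 0)) / eta
```

Dividing the Poisson samples by `eta` returns the observation to the intensity range of the original image, so different `eta` values probe different noise levels at a fixed brightness.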