Abstract

This paper proposes a nonconvex model (called LogTVSCAD) for deblurring images corrupted by impulsive noise, using the log-function penalty as the regularizer and the smoothly clipped absolute deviation (SCAD) function as the data-fitting term. The proposed nonconvex model can effectively overcome the poor performance of the classical TVL1 model at high levels of impulsive noise. A difference-of-convex-functions algorithm (DCA) is proposed to solve the nonconvex model, and the resulting convex subproblems are solved by the alternating direction method of multipliers (ADMM). Global convergence is established based on the Kurdyka–Łojasiewicz property. Experimental results show the advantages of the proposed nonconvex model over existing models.

1. Introduction

Image deblurring is an active research topic in digital image processing and is widely used in engineering and medical fields [1]. In this paper, we focus on how to recover an image degraded by blur and impulsive noise. Image blur may result from inaccurate focus, relative motion of objects, and optical degradation during digital image acquisition and transmission. Impulsive noise, such as salt-and-pepper (SP) noise and random-valued (RV) noise, arises in the storage and transmission process due to low-quality sensors or electromagnetic interference [2]. The mathematical model of image deblurring is usually expressed as
$$f = \mathcal{N}(Ku),$$
where $f$ denotes the observed noisy and blurry image, $\mathcal{N}(\cdot)$ represents the formation mechanism of the impulsive noise, and $K$ and $u$ denote a bounded blurring operator and the original image, respectively. For a given blurring operator $K$, our goal is to recover the original image $u$ from the observation $f$. In general, the operator matrix $K$ is ill-conditioned, so the original image cannot be recovered from $f$ by direct inversion. To stabilize the recovery of $u$, one popular approach is the variational method, which combines a data-fitting term with a regularization term. Rudin, Osher, and Fatemi [3] first proposed the total variation (TV) regularization model, which is widely used because it preserves object boundary information well [4, 5]. The popular model is
$$\min_u \ \mathrm{TV}(u) + \frac{\mu}{2}\|Ku - f\|_2^2,$$
where $u$ and $f$ are the original image and the observed image, respectively; $K$ denotes the linear blurring operator; and $\mu > 0$ is the regularization parameter used to balance the regularization term and the data-fitting term. This TVL2 model is optimal when the measurement noise is Gaussian distributed. However, non-Gaussian noise is more common in practice, and the performance of $\ell_2$-norm-based methods may then degrade severely.
The TVL1 model, combining TV regularization with an $\ell_1$-norm data-fitting term [6–9], was proposed to deal with impulsive noise (a typical non-Gaussian noise). Its mathematical formulation can be expressed as follows:
$$\min_u \ \mathrm{TV}(u) + \mu\|Ku - f\|_1,$$
where $\mathrm{TV}(u)$ is called the TV norm of the variable $u$; it can be based on the $\ell_1$-norm (anisotropic) or the $\ell_2$-norm (isotropic), i.e.,
$$\mathrm{TV}_{\ell_1}(u) = \sum_i \big(|(D_1 u)_i| + |(D_2 u)_i|\big) \quad \text{or} \quad \mathrm{TV}_{\ell_2}(u) = \sum_i \sqrt{(D_1 u)_i^2 + (D_2 u)_i^2},$$
where $D = (D_1, D_2)$ is a finite difference operator. Generally, the TVL1 model with the anisotropic TV norm can be expressed through a linear system, which is easier to handle than the isotropic one. However, the isotropic TV norm is more realistic and more effective [10]. Numerically, several efficient algorithms have been proposed to solve the TVL1 model (3), such as the split Bregman method [11, 12], the primal-dual method [7, 8, 13], and the alternating direction method of multipliers [9, 14, 15].
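As a concrete illustration of the two discrete TV norms, the following Python sketch computes both for a 2-D image. This is an illustrative implementation, not code from the paper; the forward-difference operator with periodic boundary conditions is an assumed discretization.

```python
import numpy as np

def tv_norms(u):
    """Anisotropic and isotropic discrete TV of a 2-D image u."""
    # Forward differences with periodic boundary conditions.
    dx = np.roll(u, -1, axis=0) - u
    dy = np.roll(u, -1, axis=1) - u
    aniso = np.sum(np.abs(dx) + np.abs(dy))    # l1-norm of the gradient field
    iso = np.sum(np.sqrt(dx ** 2 + dy ** 2))   # sum of pixelwise l2-norms
    return aniso, iso
```

By the norm inequality $\|g\|_2 \le \|g\|_1$ applied pixelwise, the anisotropic value is always at least the isotropic one.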

However, the classical TV-norm regularization model often underestimates the amplitudes of signal discontinuities [16, 17]. For high-level impulsive noise, the solution of the TVL1 model (3) is biased, because the data-fitting term penalizes all data equally [15]. To improve the performance of image restoration, nonconvex penalties have been considered, e.g., the smoothly clipped absolute deviation (SCAD) [18], the $\ell_p$ quasi-norm [19, 20], the log-function penalty [21, 22], and the minimax concave penalty (MCP) [23, 24]. Such nonconvex techniques play an increasingly important role in solving image restoration problems. Because a nonconvex model yields a better approximation to the $\ell_0$-norm, it can alleviate the bias of the $\ell_1$-norm [25–28]. In [26], the authors developed a nonconvex model called TVSCAD with the SCAD penalty function as the data-fitting term. In that model, they suggested that if an observation is not severely damaged, data fitting should be enforced; otherwise, that observation should be penalized less, or not at all. Building on this work, the authors of [27] very recently proposed a TVLog model that uses TV as the regularizer and the log penalty function as the data-fitting term.

In this paper, we continue to study the problem of image restoration with impulsive noise. Our goal is to obtain a higher-quality recovered image through a newly constructed nonconvex model. Using the log-function penalty as a nonconvex regularizer and the SCAD-function penalty as the data-fitting term, we propose the nonconvex model
$$\min_{0 \le u \le 1} \ \Phi_{\log}(Du) + \mu\, \Phi_{\mathrm{scad}}(Ku - f),$$
where $\mu > 0$ is the regularization parameter and $\Phi_{\log}(x) = \sum_i \phi_{\log}(x_i)$, $\Phi_{\mathrm{scad}}(x) = \sum_i \phi_{\mathrm{scad}}(x_i)$. Note that here we add the bound constraint $0 \le u \le 1$, which can improve the image recovery quality [26]. The scalar penalties $\phi_{\log}$ and $\phi_{\mathrm{scad}}$ are defined as
$$\phi_{\log}(t) = \varepsilon \log\Big(1 + \frac{|t|}{\varepsilon}\Big), \qquad
\phi_{\mathrm{scad}}(t) = \begin{cases} |t|, & |t| \le \lambda,\\[2pt] \dfrac{2a\lambda|t| - t^2 - \lambda^2}{2(a-1)\lambda}, & \lambda < |t| \le a\lambda,\\[2pt] \dfrac{(a+1)\lambda}{2}, & |t| > a\lambda, \end{cases}$$
where $\varepsilon > 0$ and $\lambda > 0$, $a > 2$ are the threshold parameters. This is a “nonconvex + nonconvex” model, which enjoys several desirable properties simultaneously. The penalties that the concave functions $\phi_{\log}$ and $\phi_{\mathrm{scad}}$ assign to individual elements are nonuniform, which makes $\Phi_{\log}$ and $\Phi_{\mathrm{scad}}$ closer to the $\ell_0$-norm than to the $\ell_1$-norm. This can easily be seen in Figure 1. Since the proposed model is nonconvex and nonsmooth, it is difficult to find an effective algorithm. To solve the proposed nonconvex model, we combine the difference-of-convex algorithm (DCA) [29, 30] with a proximal splitting method [31, 32]. In summary, the main contributions of this article are as follows: (1) A new “nonconvex + nonconvex” model for image restoration with impulsive noise is proposed. The core idea is to use the log-function penalty as the regularizer and the SCAD-function penalty as the data-fidelity term. This model therefore approximates the $\ell_0$-norm more closely than the $\ell_1$-norm does and is useful in image restoration with impulsive noise. (2) To solve the nonconvex model, we combine the DCA with ADMM, a pairing that has been used efficiently in many nonconvex optimization problems, and we prove that the proposed algorithm is globally convergent. (3) Numerical examples show the effectiveness of the proposed LogTVSCAD method in comparison with other recovery algorithms.
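The two scalar penalties can be sketched in a few lines of Python. These are common parameterizations of the log and SCAD penalties (the exact scaling used in the paper's (6)–(7) may differ); the SCAD value of $a = 3.7$ is the conventional default from the statistics literature, assumed here for illustration.

```python
import numpy as np

def phi_log(t, eps):
    # Log penalty: concave and increasing in |t|; behaves like |t| near 0
    # and grows only logarithmically for large |t|.
    return eps * np.log(1.0 + np.abs(t) / eps)

def phi_scad(t, lam, a=3.7):
    # SCAD penalty: equals |t| for small residuals (enforce data fitting),
    # bends over on (lam, a*lam], and is constant beyond a*lam, so gross
    # outliers incur no extra penalty.
    t = np.abs(t)
    mid = (2.0 * a * lam * t - t ** 2 - lam ** 2) / (2.0 * (a - 1.0) * lam)
    flat = (a + 1.0) * lam / 2.0
    return np.where(t <= lam, t, np.where(t <= a * lam, mid, flat))
```

One can verify that both pieces of the SCAD definition match at $|t| = \lambda$ and $|t| = a\lambda$, so the penalty is continuous, and that both penalties stay below $|t|$, which is what makes them closer to the $\ell_0$-norm than the $\ell_1$-norm.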

The rest of this paper is organized as follows. Section 2 gives some notations and preliminaries. The nonconvex LogTVSCAD model and a DCA method with ADMM are shown in Section 3. Then, in Section 4, we prove that the proposed algorithm converges to a stationary point. Section 5 presents the experimental results, which illustrate the effectiveness of the new nonconvex model. Finally, some conclusions are given.

2. Notations and Preliminaries

In this section, we first introduce some notation. Then, some properties of the log-function and SCAD-function penalties are given. Next, we recall the definition of the subdifferential and basic properties of Kurdyka–Łojasiewicz functions [33]. These results will be used later in the convergence proofs.

For any vectors $x, y \in \mathbb{R}^n$, $\langle x, y \rangle$ (or $x^\top y$) denotes their inner product; $\|\cdot\|$ denotes the $\ell_2$-norm; $\nabla f(x)$ and $\partial f(x)$ stand for the gradient and subdifferential of the function $f$ at $x$, respectively; $\mathrm{sign}(\cdot)$ is the signum function. Now, we introduce some properties of the functions $\phi_{\log}$ and $\phi_{\mathrm{scad}}$. For fixed $\varepsilon$ and $\lambda, a$, both $\phi_{\log}$ and $\phi_{\mathrm{scad}}$ are continuous on $[0, \infty)$ and increasing, continuously differentiable, and concave on $(0, \infty)$, as illustrated in Figure 1. Next, we consider two further functions $g_{\log}$ and $g_{\mathrm{scad}}$, which are induced by $\phi_{\log}$ and $\phi_{\mathrm{scad}}$:
$$g_{\log}(t) = |t| - \phi_{\log}(t), \qquad g_{\mathrm{scad}}(t) = |t| - \phi_{\mathrm{scad}}(t),$$
where $\phi_{\log}$ and $\phi_{\mathrm{scad}}$ are given by (7). Without loss of generality, we consider the multivariate generalizations of the functions $g_{\log}$ and $g_{\mathrm{scad}}$:
$$G_{\log}(x) = \sum_i g_{\log}(x_i), \qquad G_{\mathrm{scad}}(x) = \sum_i g_{\mathrm{scad}}(x_i).$$

In fact, the functions $g_{\log}$ and $g_{\mathrm{scad}}$ are continuously differentiable and convex on $\mathbb{R}$, and the log-function penalty and the SCAD-function penalty can then be expressed as
$$\phi_{\log}(t) = |t| - g_{\log}(t), \qquad \phi_{\mathrm{scad}}(t) = |t| - g_{\mathrm{scad}}(t).$$

Note that the functions in (10) play an important role in the algorithm construction below. The following useful definitions and properties are taken from the literature [34–36].

Definition 1. A function $f$ is lower semicontinuous at a point $x_0$ if $\liminf_{x \to x_0} f(x) \ge f(x_0)$. A function is said to be lower semicontinuous on its domain of definition if it is lower semicontinuous at every point of the domain.

Definition 2. For an extended-real-valued, proper, and lower semicontinuous function $f$, its (limiting) subdifferential at $x \in \mathrm{dom}\, f$ is defined as
$$\partial f(x) = \big\{ v : \exists\, x^k \to x,\ f(x^k) \to f(x),\ v^k \in \hat{\partial} f(x^k),\ v^k \to v \big\},$$
where $\hat{\partial} f(x)$ denotes the Fréchet subdifferential of $f$ at $x$, i.e., the set of all $v$ satisfying $\liminf_{y \to x,\, y \ne x} \frac{f(y) - f(x) - \langle v,\, y - x \rangle}{\|y - x\|} \ge 0$.

If $f$ is a convex function, then the subdifferential above reduces to the classical convex subdifferential:
$$\partial f(x) = \{ v : f(y) \ge f(x) + \langle v, y - x \rangle \ \text{for all } y \}.$$

Furthermore, if $f$ is continuously differentiable at $x$, then $\partial f(x) = \{\nabla f(x)\}$. If the point $x^*$ is a minimizer of $f$, a necessary condition is $0 \in \partial f(x^*)$; any point satisfying this condition is called a stationary (or critical) point of $f$.

Definition 3. Let $f$ be a proper and lower semicontinuous function; the proximity operator is defined as
$$\mathrm{prox}_{\rho f}(y) = \arg\min_x \left\{ f(x) + \frac{1}{2\rho}\|x - y\|^2 \right\},$$
where $\rho > 0$ is a penalty parameter.

It is well known that the proximal operator is particularly useful in convex optimization.
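Two proximity operators that appear later have simple closed forms: the prox of the $\ell_1$-norm is componentwise soft thresholding, and the prox of the indicator function of a box is the projection (clipping) onto that box. A minimal Python sketch (function names are our own, chosen for illustration):

```python
import numpy as np

def prox_l1(y, tau):
    # prox of tau*||.||_1: componentwise soft thresholding.
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

def prox_box(y, lo=0.0, hi=1.0):
    # prox of the indicator of the box [lo, hi]^n: projection onto the box.
    return np.clip(y, lo, hi)
```

Both operators act independently on each component, which is why the corresponding ADMM subproblems later in the paper can be solved in closed form.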

Definition 4. A function $f$ is said to possess the Kurdyka–Łojasiewicz (KL) property at a point $\bar{x} \in \mathrm{dom}\,\partial f$ if there exist $\eta \in (0, +\infty]$, a neighborhood $U$ of $\bar{x}$, and a continuous concave function $\varphi: [0, \eta) \to [0, +\infty)$ such that (i) $\varphi(0) = 0$, $\varphi$ is continuously differentiable on $(0, \eta)$, and $\varphi'(s) > 0$ for all $s \in (0, \eta)$; (ii) for all $x \in U$ satisfying $f(\bar{x}) < f(x) < f(\bar{x}) + \eta$, it holds that $\varphi'\big(f(x) - f(\bar{x})\big)\, \mathrm{dist}\big(0, \partial f(x)\big) \ge 1$. A proper closed function $f$ is named a KL function if it has the KL property at all points in $\mathrm{dom}\,\partial f$.

3. Model and Algorithm

In this section, we first propose a nonconvex model for image restoration and then use DC programming to give the ADMM algorithm to solve the model.

In the definitions of (10), we replace the variable by the gradient $Du$ and the residual $Ku - f$, respectively, which leads to our definition of the LogTVSCAD model as follows:
$$\min_{0 \le u \le 1} \ \Phi_{\log}(Du) + \mu\, \Phi_{\mathrm{scad}}(Ku - f),$$
i.e.,
$$\min_{0 \le u \le 1} \ \|Du\|_1 - G_{\log}(Du) + \mu \big( \|Ku - f\|_1 - G_{\mathrm{scad}}(Ku - f) \big).$$

Because the log and SCAD penalty functions are nonconvex, this model is nonconvex. To handle it, set
$$G(u) = \|Du\|_1 + \mu \|Ku - f\|_1 + \iota_{[0,1]}(u), \qquad H(u) = G_{\log}(Du) + \mu\, G_{\mathrm{scad}}(Ku - f),$$
where $\iota_{[0,1]}$ is the indicator function of the box constraint; then problem (15) can be expressed as the difference of the convex functions $G$ and $H$, i.e.,
$$\min_u \ F(u) = G(u) - H(u).$$

This is a DC programming problem; DC programming has been used efficiently in many nonconvex optimization problems (for more details, please see [30, 37]). According to the classical DC algorithm (DCA) iteration, for (16) we have
$$u^{k+1} \in \arg\min_u \ \big\{ G(u) - \langle y^k, u \rangle \big\},$$
where $y^k = \nabla H(u^k)$. To obtain a more accurate solution, we adopt the suggestion of the literature [26] and add a proximal term to our DCA iterations,
$$u^{k+1} \in \arg\min_u \ \Big\{ G(u) - \langle y^k, u \rangle + \frac{\beta}{2}\|u - u^k\|^2 \Big\},$$
where $\beta > 0$ is a given proximal parameter. For (15), the next iterate can therefore be expressed as
$$u^{k+1} \in \arg\min_u \ \Big\{ G(u) - H(u^k) - \langle \nabla H(u^k), u - u^k \rangle + \frac{\beta}{2}\|u - u^k\|^2 \Big\}.$$
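To make the proximal DCA iteration concrete, the following Python sketch applies it to a scalar toy problem rather than the full image model: minimize $\log(1+|x|) + \tfrac{1}{2}(x - y)^2$, i.e., the log penalty with $\varepsilon = 1$ plus a quadratic data term. This example and its variable names are our own illustration. Here $G(x) = |x| + \tfrac{1}{2}(x - y)^2$, the convex part being linearized is $g(x) = |x| - \log(1+|x|)$, and each subproblem reduces to one soft-thresholding step.

```python
import numpy as np

def soft(t, tau):
    # Soft thresholding: prox of tau*|.|
    return np.sign(t) * np.maximum(np.abs(t) - tau, 0.0)

def g_log_grad(t):
    # Gradient of the convex, smooth part g(t) = |t| - log(1 + |t|).
    return np.sign(t) * np.abs(t) / (1.0 + np.abs(t))

def prox_dca(y, beta=0.5, iters=100):
    """Proximal DCA for min_x log(1 + |x|) + 0.5*(x - y)**2."""
    x = y
    for _ in range(iters):
        s = g_log_grad(x)                        # linearize the concave part at x^k
        t = (y + beta * x + s) / (1.0 + beta)    # collect the quadratic terms
        x = soft(t, 1.0 / (1.0 + beta))          # closed-form convex subproblem
    return x
```

For example, with $y = 3$ the stationarity condition $\tfrac{1}{1+x} + x - 3 = 0$ has the positive root $x = 1 + \sqrt{3}$, and the iteration converges to it; the objective is monotonically nonincreasing along the iterates, mirroring Lemma 1 below.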

By omitting the constant terms in formula (19), we have
$$u^{k+1} \in \arg\min_{0 \le u \le 1} \ \Big\{ \|Du\|_1 + \mu\|Ku - f\|_1 - \langle y^k, u \rangle + \frac{\beta}{2}\|u - u^k\|^2 \Big\}.$$

For the above model (20), under certain conditions, its objective function is strongly convex, so it has a unique solution at every step. To solve problem (20), we first introduce the auxiliary variables
$$w = Du, \qquad z = Ku - f, \qquad v = u.$$

Then, we rewrite (20) as a constrained minimization problem:
$$\min_{u, w, z, v} \ \|w\|_1 + \mu\|z\|_1 + \iota_{[0,1]}(v) - \langle y^k, u \rangle + \frac{\beta}{2}\|u - u^k\|^2 \quad \text{s.t.} \ \ w = Du, \ z = Ku - f, \ v = u.$$

Let the augmented Lagrangian function of model (22) be
$$\begin{aligned} \mathcal{L}(u, w, z, v; p, q, r) ={}& \|w\|_1 + \mu\|z\|_1 + \iota_{[0,1]}(v) - \langle y^k, u \rangle + \frac{\beta}{2}\|u - u^k\|^2 \\ & - \langle p, w - Du \rangle + \frac{\beta_1}{2}\|w - Du\|^2 - \langle q, z - Ku + f \rangle + \frac{\beta_2}{2}\|z - Ku + f\|^2 \\ & - \langle r, v - u \rangle + \frac{\beta_3}{2}\|v - u\|^2, \end{aligned}$$
where $p, q, r$ are Lagrange multipliers and $\beta_1, \beta_2, \beta_3 > 0$ are penalty parameters. Given $(w^j, z^j, v^j)$ and $(p^j, q^j, r^j)$, according to the classical ADMM, the iterative scheme for problem (23) can be expressed as follows:
$$\begin{aligned} (w^{j+1}, z^{j+1}, v^{j+1}) &\in \arg\min_{w, z, v} \ \mathcal{L}(u^j, w, z, v; p^j, q^j, r^j),\\ u^{j+1} &\in \arg\min_u \ \mathcal{L}(u, w^{j+1}, z^{j+1}, v^{j+1}; p^j, q^j, r^j),\\ p^{j+1} &= p^j - \beta_1 (w^{j+1} - Du^{j+1}),\\ q^{j+1} &= q^j - \beta_2 (z^{j+1} - Ku^{j+1} + f),\\ r^{j+1} &= r^j - \beta_3 (v^{j+1} - u^{j+1}). \end{aligned}$$

In this scheme, the ADMM is applied directly to the two blocks of variables $(w, z, v)$ and $u$. Furthermore, via Theorem 3.2 in [9], we can ensure that the proposed ADMM (24) for solving the subproblem is convergent. In fact, $w$, $z$, and $v$ in (24) are separable from one another, so these optimizations can be performed in parallel. Moreover, via the definition of the proximity operator, we can obtain explicit solutions for $w$ and $z$; in addition, $v$ can be computed by a simple projection onto the box $[0, 1]^n$. Hence, these subproblems have the closed-form solutions
$$w^{j+1} = \mathrm{sign}\big(Du^j + p^j/\beta_1\big) \odot \max\big\{|Du^j + p^j/\beta_1| - 1/\beta_1,\, 0\big\},$$
$$z^{j+1} = \mathrm{sign}\big(Ku^j - f + q^j/\beta_2\big) \odot \max\big\{|Ku^j - f + q^j/\beta_2| - \mu/\beta_2,\, 0\big\},$$
$$v^{j+1} = P_{[0,1]}\big(u^j + r^j/\beta_3\big),$$
where $\mathrm{sign}(\cdot)$ is the signum function and $\odot$ denotes the componentwise product. Then, we consider how to solve the $u$-subproblem of (24). Via the first-order optimality conditions, the corresponding normal equation is
$$\big(\beta_1 D^\top D + \beta_2 K^\top K + (\beta + \beta_3) I\big) u = D^\top\big(\beta_1 w^{j+1} - p^j\big) + K^\top\big(\beta_2 (z^{j+1} + f) - q^j\big) + \beta_3 v^{j+1} - r^j + \beta u^k + y^k,$$
whose coefficient matrix is nonsingular since $\beta + \beta_3 > 0$. For problem (28), there is an efficient solution by inverse fast Fourier transforms [14].
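The FFT step can be sketched as follows. This Python sketch is illustrative (function names and the generic right-hand side are our own), assuming periodic boundary conditions so that both $D^\top D$ and $K^\top K$ diagonalize under the 2-D discrete Fourier transform; the identity-term coefficient `beta0` stands for the combined $\beta + \beta_3$.

```python
import numpy as np

def psf_to_otf(psf, shape):
    # Zero-pad the PSF to the image size and circularly shift its center
    # to index (0, 0); the FFT then represents circular convolution.
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for axis, size in enumerate(psf.shape):
        pad = np.roll(pad, -(size // 2), axis=axis)
    return np.fft.fft2(pad)

def solve_u_subproblem(rhs, psf, beta0, beta1, beta2):
    """Solve (beta0*I + beta1*D^T D + beta2*K^T K) u = rhs via 2-D FFTs."""
    shape = rhs.shape
    # Forward-difference kernels along each axis, as circular convolutions.
    d1 = np.zeros(shape); d1[0, 0] = -1.0; d1[-1, 0] = 1.0
    d2 = np.zeros(shape); d2[0, 0] = -1.0; d2[0, -1] = 1.0
    otf_d1, otf_d2 = np.fft.fft2(d1), np.fft.fft2(d2)
    otf_k = psf_to_otf(psf, shape)
    denom = (beta0
             + beta1 * (np.abs(otf_d1) ** 2 + np.abs(otf_d2) ** 2)
             + beta2 * np.abs(otf_k) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

Since the identity coefficient `beta0` is strictly positive, the Fourier-domain denominator never vanishes, matching the nonsingularity of the normal-equation matrix; the whole solve costs only two FFTs per inner iteration.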

Finally, we summarize the complete DCA–ADMM algorithm for solving the proposed LogTVSCAD model (5).

3.1. Algorithm (LogTVSCAD)
Step 0 (Initialization and data): Input the model and algorithm parameters and the tolerance $\epsilon > 0$. Choose an initial point $u^0$ and let $k = 0$.
Step 1: Given $u^k$, compute the new iterate $u^{k+1}$ by (29): for $j = 0, 1, 2, \ldots$, update $(w^{j+1}, z^{j+1}, v^{j+1})$ and $u^{j+1}$ by (25), (26), and (28), respectively; if the inner stopping criterion is satisfied, break.
Step 2: If $\|u^{k+1} - u^k\| \le \epsilon$, STOP; otherwise, set $k = k + 1$ and go back to Step 1.
Note: the algorithm is composed of inner and outer loops. The inner loop uses the ADMM to solve the subproblems, and the outer loop uses the DC programming framework to handle the nonconvex model.

4. Convergence Analysis

In this section, we analyze the convergence of Algorithm LogTVSCAD. First, we present the following lemma, which is the basis for proving global convergence.

Lemma 1. For a given parameter $\beta > 0$, the sequence $\{u^k\}$ generated by Algorithm LogTVSCAD satisfies
$$F(u^{k+1}) \le F(u^k) - \frac{\beta}{2}\|u^{k+1} - u^k\|^2.$$

Proof. By the convexity of $H$ and the definition of $y^k = \nabla H(u^k)$, we obtain
$$H(u^{k+1}) \ge H(u^k) + \langle y^k, u^{k+1} - u^k \rangle.$$
Following (20), since $u^{k+1}$ minimizes the subproblem, we have
$$G(u^{k+1}) - \langle y^k, u^{k+1} \rangle + \frac{\beta}{2}\|u^{k+1} - u^k\|^2 \le G(u^k) - \langle y^k, u^k \rangle.$$
Combining the two inequalities, we obtain $F(u^{k+1}) = G(u^{k+1}) - H(u^{k+1}) \le F(u^k) - \frac{\beta}{2}\|u^{k+1} - u^k\|^2$.
It is easy to see that $G$ and $H$ are semialgebraic from their definitions. We also know from the literature [26, 27] that the SCAD and log penalty functions enjoy the KL property. Then, the following result is obtained.

Lemma 2. For a given parameter $\beta > 0$, the function $F$ is a KL function.

Lemma 3. Let $s^{k+1} \in \partial F(u^{k+1})$; then, for a sufficiently large constant $c > 0$,
$$\|s^{k+1}\| \le c\,\|u^{k+1} - u^k\|.$$

Proof. It follows from the optimality condition of (20) that $y^k - \beta(u^{k+1} - u^k) \in \partial G(u^{k+1})$, and hence $y^k - \beta(u^{k+1} - u^k) - \nabla H(u^{k+1}) \in \partial F(u^{k+1})$. According to [35], $\nabla G_{\log}$ and $\nabla G_{\mathrm{scad}}$ are Lipschitz continuous, and so is $\nabla H$. Moreover, $\{F(u^k)\}$ is sufficiently decreasing by Lemma 1. Thus, we obtain $\mathrm{dist}\big(0, \partial F(u^{k+1})\big) \le c\,\|u^{k+1} - u^k\|$, where $c$ is a sufficiently large constant.

Theorem 1. Let $\{u^k\}$ be the sequence generated by Algorithm LogTVSCAD; then it converges to a critical point of (5).

Proof. Since $\{u^k\}$ is bounded, it follows from Lemma 1 that
$$\sum_{k=0}^{\infty} \|u^{k+1} - u^k\|^2 < \infty.$$
Then $\|u^{k+1} - u^k\| \to 0$ as $k \to \infty$. Let $u^*$ be any accumulation point of the sequence $\{u^k\}$. There exists a subsequence $\{u^{k_j}\}$ such that
$$u^{k_j} \to u^* \quad \text{and} \quad F(u^{k_j}) \to F(u^*).$$
Thus, combining Lemmas 2 and 3 with Theorem 2.9 of [38], the whole sequence converges and $0 \in \partial F(u^*)$.

5. Numerical Results

In this section, we evaluate the performance of the proposed LogTVSCAD method through numerical experiments. We provide the results of the classical TVL1 model [9] for reference, and we also compare with the nonconvex TVSCAD [26] and TVLog [27] models. All experiments are performed in MATLAB R2015a on a PC with an Intel(R) Core(TM) 2.2 GHz CPU and 8.0 GB of RAM.

To assess the quality of the recovered images, we use the peak signal-to-noise ratio (PSNR) and the signal-to-noise ratio (SNR) as evaluation indicators, because they are widely used in the image processing literature. Higher PSNR and SNR indicate better image quality; they are defined as follows:
$$\mathrm{PSNR} = 10 \log_{10} \frac{mn \max_i |\bar{u}_i|^2}{\|u - \bar{u}\|^2}, \qquad \mathrm{SNR} = 10 \log_{10} \frac{\|\bar{u} - \mathrm{mean}(\bar{u})\|^2}{\|u - \bar{u}\|^2},$$
where $m \times n$ is the size of the image and $\bar{u}$ and $u$ denote the original image and the restored image, respectively; $\mathrm{mean}(\bar{u})$ is the mean intensity value of $\bar{u}$. The structural similarity index (SSIM) is another effective index for comparing image quality, with SSIM values closer to one indicating better structure preservation; for more details, please see [39].
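For completeness, the two indicators can be computed as in the following sketch (our own minimal implementation; the default peak value of 1 assumes intensities scaled to $[0, 1]$):

```python
import numpy as np

def psnr(u_true, u_rec, peak=1.0):
    # Peak signal-to-noise ratio in dB; peak is the maximum intensity value.
    mse = np.mean((u_rec - u_true) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def snr(u_true, u_rec):
    # Signal-to-noise ratio in dB, with the signal energy measured
    # relative to the mean intensity of the original image.
    noise = np.sum((u_rec - u_true) ** 2)
    signal = np.sum((u_true - u_true.mean()) ** 2)
    return 10.0 * np.log10(signal / noise)
```

For instance, a uniform error of 0.1 on a $[0, 1]$-valued image gives an MSE of 0.01 and hence a PSNR of 20 dB.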

In our experiments, we choose two images, House (256) and Peppers (512), as the test images, because they are widely used in academic research on image processing. The three types of blur tested are generated by the MATLAB function fspecial: a Gaussian blur (parameters hsize and std), a motion blur (parameters len and angle), and an average blur (parameter hsize). Two common types of impulsive noise are tested: salt-and-pepper (SP) and random-valued (RV) noise. It is well known that RV noise is more difficult to remove than SP noise. In the experiments, we tested various noise levels: 60% and 90% SP noise and 70% RV noise.

The selection of parameters in models and algorithms is an open question; with identical parameter settings, the relative performance of different methods can even reverse. The parameters of TVL1, TVSCAD, and TVLog are selected following the recommendations of the literature [26, 27]. For the regularization parameter $\mu$, we swept over a range of candidate values.

The penalty parameters of (24) are likewise chosen from a small set of candidate values.

In the LogTVSCAD model, the threshold and penalty parameters are tuned separately for SP noise and for RV noise; for RV noise, some parameters are adjusted with the iteration number. From all the tested cases, we can see that the three nonconvex models (TVSCAD, TVLog, and LogTVSCAD) outperform the TVL1 model. Figures 2–4 show that the three nonconvex models have approximately the same effect at low noise levels. In Figures 5–10, we show the results restored by TVL1, TVSCAD, TVLog, and LogTVSCAD on the test images House (256) and Peppers (512) with Gaussian blur, motion blur, and average blur at the 90% SP noise level. These experimental results show that our LogTVSCAD is slightly better than the other methods at high noise levels. For 70% RV noise, the higher PSNR values in Figures 11–16 indicate that the LogTVSCAD method is again better than the other three methods. Furthermore, all the test results, including CPU time, SNR, and SSIM, are listed in Table 1. From the table, we note that the LogTVSCAD model achieves higher PSNR and SSIM values across the tested cases. The experimental results show that the LogTVSCAD model performs better at both low and high noise levels. The results in Table 1 also show that the TVL1 model has an advantage in CPU time.

Figure 17 shows the convergence curves of the three methods LogTVSCAD, TVLog, and TVSCAD. For the Peppers image with 90% SP noise, Figures 17(a)–17(c) show plots of SNR versus iteration number under Gaussian blur, motion blur, and average blur, respectively. It is easy to see that the SNR values increase as the number of iterations increases. The curves of LogTVSCAD are slightly higher than those of TVLog and TVSCAD. From Figures 17(d)–17(f), for 70% RV noise, we can see that the proposed LogTVSCAD performs better than TVSCAD and TVLog. Hence, the proposed new model is effective and desirable.

Finally, we compare our method with another recent nonconvex approach (called L0TV) [40]. Its authors reformulated the L0TV model as an equivalent biconvex mathematical program with equilibrium constraints and then solved it using a proximal ADMM. In these experiments, we chose three other typical images for testing, i.e., satellite (256), macaws (512), and boat (1024), which are shown in Figure 18. The blur types are again the motion, average, and Gaussian blurs, and we tested 60% SP noise, 90% SP noise, and 70% RV noise. In terms of parameter settings, the satellite (256) and macaws (512) images adopt the same settings as before; for the boat (1024) image, one parameter is re-selected while the others remain unchanged. The parameter selection for the L0TV method follows the recommendations of the literature [40]. Figures 19–21 show some visual comparisons of the restored images. We observe that both L0TV and LogTVSCAD restore the blurred and noisy images well. In Figure 21, the PSNR values of both methods are greater than 30, which shows that both methods are effective for low-level noise. The numerical comparison between the two methods is shown in Table 2. It is clear that the proposed LogTVSCAD also performs well.

6. Conclusions

In this paper, we proposed a new LogTVSCAD model for image restoration with impulsive noise. To solve the nonconvex model, we first apply the DCA to reformulate the nonconvex problem and then use the ADMM to solve the resulting subproblems. The global convergence of the proposed algorithm is proved. Experimental results on image recovery show that the proposed LogTVSCAD model is an effective approach for impulsive noise and is competitive with TVL1, TVSCAD, and TVLog. In future work, we will consider applications of this method in other fields and investigate other nonconvex reconstruction methods.

Data Availability

The data used to support the findings of this study are available from the corresponding authors upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 11901137 and 61967004, in part by the China Postdoctoral Science Foundation under Grant 2020M682959, in part by the Natural Science Foundation of Guangxi Province under Grant 2018GXNSFBA281023, in part by Scientific Research Fund of Hunan Provincial Education Department under Grant 20A273, and in part by Research Fund of Mathematics Discipline of Hunan University of Humanities, Science and Technology under Grant 2020SXJJ01.