Abstract

Image restoration is an interesting ill-posed problem. It plays a critical role in image processing. Among the images that have been degraded by additive Gaussian noise, we look for an image that is as close to the original as possible. Image denoising is the task of restoring a noisy image after it has been captured. This paper compares the numerical results achieved by the prox-penalty method and the split Bregman algorithm for the anisotropic and isotropic TV denoising problems in terms of image quality, convergence, and signal-to-noise ratio (SNR). It should be mentioned that isotropic TV denoising is faster than anisotropic TV denoising. Experimental results indicate that the prox algorithm produces the best high-quality output (clean, not oversmoothed, and textures are preserved). In particular, the prox method gives SNR values of (21.4, 21) for the denoised image at sigma 0.08 and 0.501, whereas the anisotropic TV and the isotropic TV give SNR values of (10.0884, 10.1155) at sigma 0.08 and (-1.4635, -1.4733) at sigma 0.501.

1. Introduction

Image processing is a subset of signal processing that focuses on images and videos. All procedures applied to an image in order to increase its readability and facilitate its interpretation are referred to as image processing. Image restoration is an important topic in industry. It is a fundamental problem in a variety of applied sciences, including medical imaging [1, 2], microscopy and astronomy [3], film restoration, and image and video coding [4, 5]. Image restoration is an interesting subject in image processing since it occurs at the very beginning of the acquisition chain and involves recovering a clean original image from a degraded one [2, 3].

We propose an image restoration technique based on applying a proximal algorithm to a minimization problem. We assume that the original image has been degraded by additive noise.

We attempt to reconstruct the original image $u$ from the observed image $f$, which is therefore a degraded version of $u$. Assuming that the additive noise is Gaussian, the maximum likelihood approach leads us to seek $u$ as a solution of the constrained optimization problem
$$\min_{u \in K} J(u),$$
where the admissible set $K$ is determined by the fidelity to the observed data $f$ and $J(u)$ represents the total variation of $u$, specified by
$$J(u) = \int_\Omega |\nabla u(x)|\, dx.$$

Total variation (TV) regularization is a regularization term that allows for discontinuous solutions. Because of its capacity to preserve “jumps” in the solution, it has become particularly popular in image restoration. The space of functions of bounded variation is denoted $BV(\Omega)$; it is characterized as the set of functions $u \in L^1(\Omega)$ whose total variation $J(u)$ is finite.

To tackle this problem, we apply the proximal penalty approach (Aujol [6]; Micchelli, Shen, and Xu [7]). To this end, we state the method’s idea. Its principle is that the constrained problem is equivalent to the unconstrained problem obtained by adding to the objective the indicator function $\chi_K$ of the admissible set, defined by $\chi_K(u) = 0$ if $u \in K$ and $\chi_K(u) = +\infty$ otherwise.

Definition 1 ([7]). We denote by $\mathbb{R}^d$ the usual $d$-dimensional Euclidean space.

Let $\psi$ be a real-valued convex function on $\mathbb{R}^d$. For all $v \in \mathbb{R}^d$, the proximal operator of $\psi$ is defined by
$$\operatorname{prox}_\psi(v) = \arg\min_{x \in \mathbb{R}^d} \left\{ \psi(x) + \frac{1}{2}\|x - v\|_2^2 \right\}.$$

We give the following three examples.

Example 1. If and , then

Example 2. If and , then, with denoting the transpose of a row vector.

Example 3. If and , then
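A proximal operator that recurs in the split Bregman iterations below is the soft-thresholding (shrink) operator associated with a scaled absolute-value (ℓ1) term. A minimal MATLAB sketch of this operator, written here as a helper named shrink (the name and signature are our illustrative choice, not necessarily the paper's):

% Soft thresholding: the proximal operator of t*|.| applied elementwise.
function y = shrink(x, t)
    y = sign(x) .* max(abs(x) - t, 0);
end

For example, shrink(0.3, 0.1) returns 0.2, and shrink(-0.05, 0.1) returns 0.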

2. General Principle of the Proximal-Penalty Methods

According to [8–12], the general principle of the proximal-penalty methods is as follows: (1) replace the constrained problem with a problem that has no constraints.

The penalty coefficient is positive. The external penalty function measures the violation of the constraint; it is expressed through the projection of the current point onto the admissible set.

When the penalty coefficient tends to its limit, the obtained solution is a solution of the original constrained problem. (2) We begin by selecting a penalty coefficient and then solve the unconstrained problem; let the obtained point be the current iterate. (3) The stopping test: the current iterate is a good approximation of the optimum if the penalty term is sufficiently small; otherwise, a new penalty coefficient is chosen and the corresponding new unconstrained problem is solved.

The problem is linked to

When the relaxation algorithm is applied to this problem, it transforms it and generates a sequence of iterates whose limit is a solution of the penalized problem.

As a result, a simple iteration solves the following problem:

Step 0: choose an initial point, an initial penalty coefficient, and a precision.
Step 1: with the current penalty coefficient and precision, we use the minimization approach to find a solution of the corresponding unconstrained penalized problem.
Step 2: denote the obtained solution as the new iterate.
If the penalty term is sufficiently small, then this iterate is an excellent approximation of the optimal solution, and the calculations stop at the current iteration.
Otherwise, we update the penalty coefficient, take the current iterate as the new starting point, and return to Step 1.
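A minimal MATLAB sketch of this outer loop, assuming a hypothetical inner solver solve_penalized(f, mu, tol, u0) that minimizes the penalized objective for a fixed penalty coefficient mu, a hypothetical function penalty_term(u, f) measuring the constraint violation, and a multiplicative increase of the penalty coefficient (all names and choices here are illustrative assumptions, not the paper's implementation):

% Prox-penalty outer loop (illustrative sketch).
mu   = 1;                 % initial penalty coefficient (assumed value)
beta = 10;                % penalty update factor (assumed value)
tol  = 1e-5;              % precision, as used in the experiments
u    = f;                 % start from the noisy image f
for k = 1:50              % cap on the number of outer iterations
    u = solve_penalized(f, mu, tol, u);   % hypothetical inner solver (Step 1)
    if penalty_term(u, f) < tol           % hypothetical stopping test (Step 2)
        break;
    end
    mu = beta * mu;                       % update the penalty coefficient
end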

3. Bregman Algorithms and Imaging

3.1. Bregman Projection

Definition 2 (Bregman distance). Let $X$ be a Banach space, $\varphi : X \to (-\infty, +\infty]$ a lower semicontinuous proper convex function, and let $C$ be a nonempty closed convex set. Suppose $\varphi$ is Gâteaux differentiable in $\operatorname{int}(\operatorname{dom}\varphi)$, with its Gâteaux derivative denoted by $\varphi'$. The Bregman distance associated with $\varphi$ is the function defined as follows:
$$D_\varphi(u, v) = \varphi(u) - \varphi(v) - \langle \varphi'(v), u - v \rangle,$$
where $\operatorname{int}(\operatorname{dom}\varphi)$ is the interior of the domain of $\varphi$.

Let $\varphi(x) = \sum_i x_i \log x_i$ (the negative entropy).

Then the associated Bregman distance is the Kullback–Leibler (K-L) divergence,
$$D_\varphi(u, v) = \sum_i \left( u_i \log \frac{u_i}{v_i} - u_i + v_i \right),$$
so the K-L divergence is a Bregman distance.

Remark 1. The following are some historical notes:
(i) Bregman was the first to use this distance measure, in [13].
(ii) Censor and Lent introduced and developed the concept [14].
(iii) In 1976, Bregman devised a simple and effective method for using the function in the design and analysis of feasibility and optimization algorithms. This has spawned a burgeoning field of research in which Bregman’s technique is used to design and analyze iterative algorithms, not only for feasibility and optimization problems but also for solving variational inequalities and computing fixed points. More information can be found in [15–17].

Definition 3 (Bregman projection). Let $\varphi$ be a lower semicontinuous smooth convex function, and let $C$ be a closed convex set with $C \cap \operatorname{int}(\operatorname{dom}\varphi) \neq \emptyset$. The approximation problem then admits a unique solution, known as the Bregman projection of $v$ onto $C$, defined by
$$\Pi_C^{\varphi}(v) = \arg\min_{x \in C} D_\varphi(x, v).$$

3.2. Bregman Algorithm

Bregman’s iterative technique was first introduced and studied in the field of image processing by Osher et al., who proposed the iterative Bregman algorithm in [18] as an effective algorithm for solving optimization problems. Their main idea was to first transform a constrained optimization problem into an unconstrained one by using the Bregman distance. The algorithm addresses problems of the form
$$\min_{u \in X} J(u) + H(u, f),$$
such that $J$ and $H(\cdot, f)$ are nonnegative convex functions of $u$, $H$ is a smooth nonnegative convex function with respect to $u$ for a given $f$, and $X$ is a closed convex set.

The Bregman iterative algorithm is defined as follows by Osher et al. in [18].

Initialization: $u^0 = 0$ and $p^0 = 0$.
While “not converged,” do,
$u^{k+1} = \arg\min_{u} D_J^{p^k}(u, u^k) + H(u, f)$.
$p^{k+1} = p^k - \nabla_u H(u^{k+1}, f)$.
$k \leftarrow k + 1$.
End while.
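For the denoising case $H(u, f) = \frac{\lambda}{2}\|u - f\|_2^2$, this iteration takes the well-known “add back the residual” form. A minimal MATLAB sketch, assuming a hypothetical routine tv_denoise(g, lambda) that solves the inner subproblem $\min_u J(u) + \frac{\lambda}{2}\|u - g\|_2^2$ (the routine name, the fixed number of outer iterations, and this particular specialization are our illustrative assumptions):

% Bregman iteration specialized to TV denoising (illustrative sketch).
fk = f;                            % f: observed noisy image
u  = zeros(size(f));
for k = 1:20                       % fixed number of Bregman iterations (assumed)
    u  = tv_denoise(fk, lambda);   % inner subproblem: min_u J(u) + lambda/2*||u - fk||^2
    fk = fk + (f - u);             % add the residual back into the data
end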
3.3. The Convergence Theorem

In [18], this Bregman variant was presented for TV-based image restoration. Other features of this iterative Bregman scheme, as well as its convergence analysis, have been established in detail in [18–20]. The method is started from the initialization given above. To solve the initial problem, the residual term must become small; the Bregman iteration is therefore continued until the residual term has converged. Because of its excellent convergence properties, the Bregman iterative algorithm has been applied to a variety of problems, including ill-posed problems and image decomposition. Among these properties: with noisy data, we can achieve convergence to the original image we seek to recover, as well as convergence in the sense of the Bregman distance to the original image and a monotone decrease of the residual term. These properties taught us a great deal about image restoration during our research.

The technique generates a sequence that decreases monotonically.

Proposition 1 ([19]). The residual terms decrease monotonically: $H(u^{k+1}, f) \le H(u^k, f)$.

Proposition 2 ([19]). The residual terms converge to the smallest value of $H$.
If $H(\cdot, f)$ is minimized by $\tilde{u}$ and $J(\tilde{u}) < \infty$, then $H(u^k, f) \le H(\tilde{u}, f) + J(\tilde{u})/k$, so that $H(u^k, f) \to H(\tilde{u}, f)$.

3.4. The Split Bregman Algorithm

Goldstein and Osher first proposed the split Bregman algorithm in [21] to handle optimization problems of the more general form
$$\min_{u \in X} \|\Phi(u)\|_1 + H(u),$$
where $X$ is a closed convex set and $\|\Phi(\cdot)\|_1$ and $H$ are convex functions. This problem is equivalent to the following constrained minimization problem:
$$\min_{u \in X,\, d} \|d\|_1 + H(u) \quad \text{subject to} \quad d = \Phi(u).$$

Goldstein and Osher introduced the split Bregman algorithm, which was written as follows:

Initialization: $u^0 = 0$, $d^0 = 0$, and $b^0 = 0$.
While $\|u^k - u^{k-1}\|_2 > \mathrm{tol}$ do,
$u^{k+1} = \arg\min_{u} H(u) + \frac{\lambda}{2}\|d^k - \Phi(u) - b^k\|_2^2$.
$d^{k+1} = \arg\min_{d} \|d\|_1 + \frac{\lambda}{2}\|d - \Phi(u^{k+1}) - b^k\|_2^2$.
$b^{k+1} = b^k + \Phi(u^{k+1}) - d^{k+1}$.
$k \leftarrow k + 1$.
End while.

The split Bregman algorithm is used to solve some of the most common optimization problems; in particular, the anisotropic and isotropic TV denoising problems are solved with the split Bregman method.

3.4.1. Anisotropic TV Denoising Problem

The anisotropic TV denoising problem, considered in [19], reads
$$\min_{u} \|\nabla_x u\|_1 + \|\nabla_y u\|_1 + \frac{\mu}{2}\|u - f\|_2^2,$$
where $f$ is the noisy image and the discrete gradients $\nabla_x u$ and $\nabla_y u$ will be denoted by the auxiliary variables $d_x$ and $d_y$, respectively. The problem is solved through an equivalent constrained problem.

We solve the problem as follows:

The split Bregman algorithm can be used to tackle this last problem.

We use the shrink (soft-thresholding) operator for the updates of $d_x$ and $d_y$.

A Gauss-Seidel function $G$ is also used for the update of $u$.

Initialization: $u^0 = f$, $d_x^0 = d_y^0 = 0$, and $b_x^0 = b_y^0 = 0$.
While $\|u^k - u^{k-1}\|_2 > \mathrm{tol}$ do,
$u^{k+1} = G(u^k)$, where $G$ is the Gauss-Seidel function.
   $d_x^{k+1} = \operatorname{shrink}(\nabla_x u^{k+1} + b_x^k, 1/\lambda)$.
   $d_y^{k+1} = \operatorname{shrink}(\nabla_y u^{k+1} + b_y^k, 1/\lambda)$.
$b_x^{k+1} = b_x^k + \nabla_x u^{k+1} - d_x^{k+1}$.
$b_y^{k+1} = b_y^k + \nabla_y u^{k+1} - d_y^{k+1}$.
$k \leftarrow k + 1$.
End while.
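A compact MATLAB sketch of this anisotropic split Bregman loop is given below; it is an illustrative implementation rather than the paper's code. In particular, the in-place Gauss-Seidel sweep of Goldstein and Osher is replaced by a vectorized Jacobi-style update of the same linear system, boundaries are handled by replication, and the parameter names (f, mu, lambda, tol) are our own.

function u = split_bregman_atv(f, mu, lambda, tol)
    % Split Bregman anisotropic TV denoising (illustrative sketch).
    % f: noisy image (double), mu: fidelity weight, lambda: splitting
    % weight, tol: relative stopping tolerance.
    shrink = @(x, t) sign(x) .* max(abs(x) - t, 0);   % soft thresholding
    up   = @(v) v([1, 1:end-1], :);                   % v(i-1, j), replicated border
    down = @(v) v([2:end, end], :);                   % v(i+1, j)
    lf   = @(v) v(:, [1, 1:end-1]);                   % v(i, j-1)
    rg   = @(v) v(:, [2:end, end]);                   % v(i, j+1)
    u = f;
    [dx, dy, bx, by] = deal(zeros(size(f)));
    uprev = Inf(size(f));
    while norm(u(:) - uprev(:)) > tol * norm(f(:))
        uprev = u;
        % u-update: one Jacobi-style sweep of the optimality system
        % (the original algorithm uses an in-place Gauss-Seidel sweep).
        g = up(dx) - dx + lf(dy) - dy - up(bx) + bx - lf(by) + by;
        u = (lambda * (down(u) + up(u) + rg(u) + lf(u) + g) + mu * f) ...
            / (mu + 4 * lambda);
        % d-updates: componentwise (anisotropic) soft thresholding.
        gx = down(u) - u;                             % forward difference in x
        gy = rg(u) - u;                               % forward difference in y
        dx = shrink(gx + bx, 1 / lambda);
        dy = shrink(gy + by, 1 / lambda);
        % Bregman variable updates.
        bx = bx + gx - dx;
        by = by + gy - dy;
    end
end

The stopping test compares successive iterates relative to the norm of the noisy image, which is one possible discrete counterpart of the tolerance used in the experiments.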
3.4.2. Isotropic TV Denoising Problem

The isotropic TV denoising problem, considered in [19], reads
$$\min_{u} \sum_{i,j} \sqrt{(\nabla_x u)_{i,j}^2 + (\nabla_y u)_{i,j}^2} + \frac{\mu}{2}\|u - f\|_2^2.$$

The problem is solved using an equivalent constrained problem:

To solve this problem, we solve the following problem without constraints:

The split Bregman algorithm can be used to tackle this last problem.

We give the following definitions: with $s^k = \sqrt{|\nabla_x u^{k+1} + b_x^k|^2 + |\nabla_y u^{k+1} + b_y^k|^2}$, the coupled (isotropic) shrinkage operators are
$$d_x^{k+1} = \max(s^k - 1/\lambda, 0)\,\frac{\nabla_x u^{k+1} + b_x^k}{s^k}, \qquad d_y^{k+1} = \max(s^k - 1/\lambda, 0)\,\frac{\nabla_y u^{k+1} + b_y^k}{s^k}.$$

Initialization: $u^0 = f$, $d_x^0 = d_y^0 = 0$, and $b_x^0 = b_y^0 = 0$.
While $\|u^k - u^{k-1}\|_2 > \mathrm{tol}$ do,
$u^{k+1} = G(u^k)$, where $G$ is the Gauss-Seidel function.
$d_x^{k+1} = \max(s^k - 1/\lambda, 0)\,\dfrac{\nabla_x u^{k+1} + b_x^k}{s^k}$.
$d_y^{k+1} = \max(s^k - 1/\lambda, 0)\,\dfrac{\nabla_y u^{k+1} + b_y^k}{s^k}$.
$b_x^{k+1} = b_x^k + \nabla_x u^{k+1} - d_x^{k+1}$.
$b_y^{k+1} = b_y^k + \nabla_y u^{k+1} - d_y^{k+1}$.
$k \leftarrow k + 1$.
End while.
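Relative to the anisotropic sketch above, only the d-update changes. A hedged MATLAB fragment for the coupled (isotropic) shrinkage, reusing the variables gx, gy, bx, by, and lambda from that sketch and adding eps to avoid division by zero (an implementation detail of ours, not the paper's):

% Isotropic (coupled) shrinkage replacing the two scalar shrink calls:
s  = sqrt((gx + bx).^2 + (gy + by).^2);   % magnitude of the coupled variable
w  = max(s - 1/lambda, 0) ./ (s + eps);   % shared shrinkage factor
dx = w .* (gx + bx);
dy = w .* (gy + by);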
3.4.3. Combining Anisotropic and Isotropic TV Denoising Problems

We propose an image denoising method that combines the anisotropic and isotropic TV denoising problems.

The problem is solved using an equivalent constrained problem:

To solve this problem, we solve the following problem without constraints:

In our next studies, we would like to implement the method combining anisotropic and isotropic TV denoising in MATLAB and compare it with the other denoising methods.

3.5. The Convergence Theorem
3.5.1. Anisotropic TV Denoising Algorithm

Based on the various anisotropic split Bregman algorithms [7, 19], the following relationship is defined:

with

is the size of the matrix which is defined by with matrix identity , the Kronecker product of matrices and , and is a matrix defined by

Proposition 3. Under the stated conditions, the iterative scheme converges towards its limit in a finite number of steps.

Proof. In the first case, the result is immediate. The proof is divided into three cases.
In the first case, the number in question is a fixed point of the iterative scheme. If the corresponding condition holds, then one iteration is sufficient to reach the iterative scheme’s limit. Otherwise, we take the lowest integer that exceeds the relevant quantity.
At that step, the iterative scheme reaches its limit. If the next condition holds, then the corresponding relation is satisfied.
At that step, the scheme reaches its limit.

The second case is analogous to the first; in particular, the iterative scheme converges to its limit in a finite number of steps.

Finally, we consider the third case, in which the iterative scheme takes a simpler form.

In each of the three subcases the resulting value can be identified directly, so the iteration comes to an end in a single step.

The convergence of the split Bregman anisotropic TV denoising algorithm was proved in [22]. As can be seen, a linear system must be solved at each iteration of the Goldstein-Osher split Bregman denoising scheme.

With this adjustment, the split Bregman denoising method reduces to the Jia-Zhao denoising algorithm [23], which can be called the FP2O-ATV algorithm.

In addition, both the Goldstein-Osher split Bregman algorithm and the Jia-Zhao denoising method make substantial use of the properties of the Bregman distance and enjoy convergence guarantees.

In particular, the parameter that guarantees convergence of the Jia-Zhao denoising algorithm must be less than , whereas for the algorithm FP2O-ATV to converge, it is relaxed to a number less than , which is somewhat higher than .

3.5.2. Isotropic TV Denoising Algorithm

Based on the various isotropic split Bregman algorithms [7, 19], the following relationship is defined:

Proposition 4. Under the stated conditions, the iterative scheme converges towards its limit in a finite number of steps.

Proof. We consider the first case, in which the equation reduces to a simpler form. Under the first condition, the stated relation holds for all indices, and the claim follows. In the remaining case, the limit is that of the Picard iterations. If the function involved is convex on the stated domain, then by Example 3 of the proximal operator the result follows.
The convergence of the split Bregman isotropic TV denoising algorithm has been proved in [22, 24]. A linear system must be solved at each iteration of the split Bregman isotropic TV denoising algorithm, just as at each iteration of the anisotropic method. In [21], a Gauss-Seidel iteration step was used to reach an acceptable approximation of its solution. The convergence analysis of the resulting iterative scheme, however, no longer applies when the linear problem is solved only approximately by Gauss-Seidel iteration steps.

4. An Overview of Relevant Recent Works and Methods in Image Processing

This work is an introduction to image restoration, which is an interesting ill-posed problem. Improving the quality of images is important: noise damages images and reduces the accuracy and performance of processing tasks, and there are many modern ways to remove it. In this section, we mention relevant modern methods based on the ROF model for removing Gaussian noise. Below are some representative works.

4.1. Rudin-Osher-Fatemi Model

A denoising model based on the first-order total variation was proposed by Rudin et al.; it is commonly known as the ROF model. The ROF model does a good job of removing Gaussian noise, but it does not preserve image structures well; moreover, it creates artifacts.

Let $u$, $\hat{u}$, and $f$ be an original image, a restored image, and a noisy image, respectively, where $x \in \Omega$ is a pixel location, $\Omega$ is the image domain, and the numbers of pixels along the image height and width define its size. Rudin et al. proposed the ROF model to remove Gaussian noise as follows:
$$\hat{u} = \arg\min_{u} \frac{1}{2}\int_\Omega (u - f)^2\, dx + \lambda \int_\Omega |\nabla u|\, dx.$$

Here $\lambda > 0$ is the regularization parameter. The first term is the data fidelity, while the second measures smoothness through the total variation. The ROF model is thus used for solving the denoising problem based on total variation (TV) regularization.
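In the discrete setting used in the experiments, the ROF objective can be evaluated directly. A minimal MATLAB sketch using forward differences with zero-padded last row and column (an implementation choice of ours, not taken from the paper):

% Discrete ROF objective: 0.5*||u - f||^2 + lambda * TV(u) (isotropic TV).
rof_objective = @(u, f, lambda) ...
    0.5 * sum(sum((u - f).^2)) ...
    + lambda * sum(sum(sqrt( ([diff(u, 1, 1); zeros(1, size(u, 2))]).^2 ...
                           + ([diff(u, 1, 2), zeros(size(u, 1), 1)]).^2 )));

Such a function is only for monitoring the objective value; the minimization itself is carried out by the algorithms of Section 3.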

4.2. Method of Overlapping Group Sparsity and Second-Order Total Variation Regularization

In [25], we propose an image denoising method named OGS-SOTV by combining
(a) the overlapping group sparsity total variation regularization OGS-TV, where the overlapping group sparsity functional with a given group size is defined in [25], and
(b) the second-order total variation regularization SO-TV or TV2.

A combined model with and can be considered as follows:

Assuming these choices, we obtain the adaptive denoising model based on OGS-TV and SO-TV, defined as follows: here, one parameter balances the noise removal term and the artifact elimination term, and the other is a regularization parameter. The numerical solution is obtained with ADMM (alternating direction method of multipliers) or the split Bregman method.

4.2.1. Purpose and Results of the Method

The purpose and results of the method are as follows:
(i) the noise removal performance of OGS-TV
(ii) the artifact removal performance of SO-TV
(iii) a regularization parameter estimation is also proposed so that the method can be applied automatically
(iv) OGS-SOTV can remove noise effectively as well as eliminate artifacts
(v) OGS-TV, SO-TV, and OGS-SOTV removed noise and artifacts better than the ROF model.

4.3. Medical Image Denoising Methods

In [26], we consider the denoising problem for medical images produced by X-ray/CT imaging techniques. The images are corrupted by Poisson noise. Since Poisson noise is signal dependent, we cannot control the intensity of the noise directly, so the data are transformed so that the noise becomes approximately Gaussian.

We proposed a medical image denoising method by combining
(i) total variation (TV) regularization
(ii) the Anscombe transformation.

4.3.1. Implementation Method

(1) Based on the ROF model, Le et al. proposed the Poisson denoising problem; the model is well known as the modified ROF model (mROF), in which the regularization parameter depends on the restored image at every iteration step. This reduces the accuracy and performance of the evaluation process.
(2) The Anscombe transform is a mathematical tool for converting Poisson data $x$ into approximately standard Gaussian data; it has the form $A(x) = 2\sqrt{x + 3/8}$.
(3) Instead of solving the mROF problem, we can solve the ROF problem on the transformed data and obtain a solution.
(4) Apply the inverse Anscombe transform to acquire the final denoised image.
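A minimal MATLAB sketch of this pipeline, assuming a hypothetical ROF solver rof_denoise(g, lambda) for the Gaussian denoising step and using the simple algebraic inverse of the Anscombe transform (the unbiased inverse used in practice is slightly different):

% Anscombe-transform denoising pipeline (illustrative sketch).
A    = @(x) 2 * sqrt(x + 3/8);        % Anscombe transform
Ainv = @(y) (y / 2).^2 - 3/8;         % simple algebraic inverse (approximation)
g    = A(f_poisson);                  % f_poisson: Poisson-corrupted image
v    = rof_denoise(g, lambda);        % hypothetical Gaussian/ROF denoiser
u    = Ainv(v);                       % final denoised image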

4.3.2. Purpose and Results

The purpose and results of the method are as follows:
(i) the Anscombe transform is used to convert the Poisson noise in medical images into approximately Gaussian noise
(ii) the ROF model based on TV is then applied to remove the Gaussian noise
(iii) the proposed method also gives better denoising results than ROF or mROF, both in visual terms and in restoration quality assessment metrics such as PSNR and SSIM.

4.4. Adaptive Method for Image Restoration Based on TV1 and TV2

In [27], we propose an adaptive method for image restoration based on a combination of
(i) the first-order total variation regularization (TV1), also known as the ROF model, and
(ii) the second-order total variation regularization SO-TV or TV2.

A combined model with and can be considered as follows:

The model is well known as the TV-bounded Hessian model (TV-BH).

Under these choices, we obtain the adaptive image restoration model based on TV1 and TV2 as follows: here, one parameter balances TV1 and TV2 and the other is a regularization parameter. We choose the regularization parameter based on the inverse gradient, computed with a Gaussian kernel; one possible instantiation is sketched below.
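A hedged MATLAB sketch of an inverse-gradient-based spatially adaptive parameter. The kernel size, the contrast constant k, and the exact formula are our assumptions, not necessarily those of [27]; fspecial and imfilter require the Image Processing Toolbox.

% Spatially adaptive parameter from the inverse gradient (assumed form).
G        = fspecial('gaussian', [7 7], 1.5);      % Gaussian kernel (assumed size)
us       = imfilter(u, G, 'replicate');           % pre-smoothed image
[gx, gy] = gradient(us);
k        = 10;                                    % contrast constant (assumption)
alpha    = 1 ./ (1 + k * sqrt(gx.^2 + gy.^2));    % small near edges, close to 1 in flat regions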

The numerical solution is obtained with ADMM (alternating direction method of multipliers) or the split Bregman method.

4.4.1. Purpose and Results

The purpose and results of the method are as follows:
(i) it uses the advantages of noise removal and edge preservation of the ROF model, which is based on TV1
(ii) and the artifact elimination of the second-order total variation TV2
(iii) adaptive multiscale parameter estimation: for one extreme of the parameter, artifacts are eliminated; for the other, noise is removed; and intermediate values balance the performance of noise removal and artifact elimination
(iv) experimental results indicate that the proposed method obtains better restorations in terms of visual quality as well as quantitatively by PSNR and SSIM.

5. Numerical Results

We use several images; the introduced additive noise is Gaussian, and we attempt to recover the original image in order to test the split Bregman methods on the anisotropic and isotropic TV denoising problems. Let the matrices represent an image of a given size. We then used a MATLAB command to define our noisy image, where sigma is the Gaussian noise level; a sketch of the assumed form of this command is given below.
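The exact command is not reproduced in the text; a plausible MATLAB form, assuming u is the clean image stored as a double array scaled to [0, 1] and zero-mean Gaussian noise of standard deviation sigma:

% Generate the noisy image f from the clean image u (assumed form).
sigma = 0.08;                      % Gaussian noise level used in the experiments
f = u + sigma * randn(size(u));    % additive zero-mean Gaussian noise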

Regarding the source of the image used in Figure 1: I photographed my daughter Iline with my mobile phone, then used MATLAB to convert the color image to grayscale, and conducted the studies on the resulting “Iline” image.

We used fixed parameter values and the tolerance $10^{-5}$ in our studies.

The results of the anisotropic and isotropic TV denoising algorithms for various images are shown in Tables 1 and 2. Relative errors are also reported.

Tables 3 and 4 show the performance metrics for the anisotropic and isotropic TV denoising algorithms.

Tables 5 and 6 give the results of the anisotropic and isotropic TV denoising algorithms for a single “flower” image and different sigma values.

Table 7 shows the SNR values for the “flower” image denoised using the prox-penalty, anisotropic TV denoising, and isotropic TV denoising algorithms.

5.1. Comments on Experimental Results

(i) As seen in Tables 1 and 2, isotropic TV denoising is faster and more accurate than the anisotropic version of the method. In our experiments, we found that the sequence of residuals for the image “Iline” converged monotonically to a lower value than that of the original noisy image; however, due to the large size of this image, this takes a long time. The errors for the cameraman and university images increased monotonically and converged to a relative error higher than that of the initial noisy image. This can be seen in the images and plots presented.
(ii) In Tables 3 and 4, we evaluate the quality of the restored images using the mean square error (MSE), signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), image quality index (IQI), normalized cross-correlation (NK), average difference (AD), structural content (SC), maximum difference (MD), and normalized absolute error (NAE).
(iii) For the “flower” image and the same sigma value of 0.08, Tables 5 and 6 report the SNR, number of iterations, relative error, and computation time of the isotropic and anisotropic TV denoising algorithms.
(iv) The relative error of split Bregman, an iterative technique with anisotropic and isotropic filters, does not necessarily converge monotonically, as we have shown.
(v) The quality of the images produced with the prox method, on the other hand, remains constant as the prox Lagrange value increases, and the textures are preserved after restoration.
(vi) Figures 1–3 illustrate the outcomes of the restoration methods, namely the anisotropic and isotropic TV, for the cameraman, university, and Iline images with sigma equal to 0.08.
(vii) The errors of the anisotropic and isotropic TV restorations of the cameraman, university, and Iline images, for a given value of sigma, are shown in Figures 4–6.
(viii) Figures 7 and 8 show the original and noisy versions of the flower image for sigma equal to 0.08 and 0.501.
(ix) Figures 9 and 10 also show that the restoration algorithms, namely the anisotropic and isotropic TV denoising, are not reliable and run into problems during the restoration process; in other words, they diverge as the white noise variance (sigma) increases. This is not the case for the proximal algorithm, which appears to be very robust and reliable and still handles a high sigma variance for the denoised image in Table 7.
(x) The SNR of the images restored with the prox algorithm is nearly constant.

6. Conclusions

In this paper, we have compared the proximal penalty algorithms for solving a class of nondifferentiable optimization problems with the anisotropic TV and isotropic TV denoising algorithms. Based on the comparison of the restoration results from the different related models, we can confirm that the prox algorithm is well suited to image restoration and produces the best high-quality results (clean, not oversmoothed, with textures preserved), and that its convergence is guaranteed regardless of the SNR values, in contrast with the other methods. Based on the previous findings, we can also conclude that the anisotropic TV and isotropic TV denoising algorithms behave in a directly correlated way; in other words, the smaller the sigma value, the better the image quality we obtain. The approach converges monotonically to the tolerance $10^{-5}$; despite the large size of the images, which makes the computations lengthy, the isotropic TV denoising algorithms are faster than the anisotropic ones. In our tests, we found that the restored image is sharper and more accurate. The prox algorithm also gives better denoising results than anisotropic or isotropic TV, both in terms of the visual results and of restoration quality assessment metrics such as PSNR and SNR.

In our next studies, we would like to combine anisotropic and isotropic TV for image denoising, implement the method in MATLAB, and compare the denoising methods.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.