Journal of Electrical and Computer Engineering
Volume 2013 (2013), Article ID 217021, 18 pages
Total Variation Regularization Algorithms for Images Corrupted with Different Noise Models: A Review
Department of Electrical Engineering, Pontifical Catholic University of Peru, San Miguel, Lima 32, Peru
Received 26 October 2012; Revised 17 May 2013; Accepted 9 June 2013
Academic Editor: Florian Luisier
Copyright © 2013 Paul Rodríguez. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Total Variation (TV) regularization has evolved from an image denoising method for images corrupted with Gaussian noise into a more general technique for inverse problems such as deblurring, blind deconvolution, and inpainting, one that also encompasses the Impulse, Poisson, Speckle, and mixed noise models. This paper focuses on giving a summary of the most relevant TV numerical algorithms for solving the restoration problem for grayscale/color images corrupted with several noise models, that is, the Gaussian, Salt & Pepper, Poisson, and Speckle (Gamma) noise models, as well as mixed noise scenarios such as the mixed Gaussian and impulse model. We also include the description of the maximum a posteriori (MAP) estimator for each model as well as a summary of general optimization procedures that are typically used to solve the TV problem.
Acquired digital images are subject to different kinds of noise [1, Chapter 7] depending on the hardware used for their acquisition, which may involve additional degradations due to transmission errors or other external factors. The more common image noise models include the Gaussian, Impulse (e.g., Salt & Pepper), Poisson, and Speckle (e.g., Gamma) models, as well as the mixed Gaussian and impulse noise model.
While there are several algorithms that can be considered state of the art for a particular noise model, the adaptation of such specialized algorithms to handle other noise models has typically proven to be either severely difficult or simply impossible. It is in this regard that regularization methods stand out, owing to their flexibility to accommodate any given noise model; while there are several examples of such methods (Tikhonov regularization, wavelet image restoration, sparsity-based denoising and inversion, etc.), here we focus on the Total Variation (TV) regularization method. The original TV regularization method targeted image denoising under Gaussian noise; nevertheless, it has evolved into a more general technique for inverse problems while retaining its edge-preserving property.
The TV-regularized solution of the inverse problem involving data b and observation operator K (the identity in the case of denoising) is the minimizer of the functional T(u) = F(Ku, b) + λ·TV(u), where F(Ku, b) is the data fidelity term, which depends on the noise model (for instance, see (40), (42), (43), and (44)), the scalar λ > 0 is the regularization parameter, TV(u) represents the total variation of the solution u, and ‖v‖_p denotes the p-norm of a vector v. Moreover, depending on the noise model, a nonnegativity constraint may be imposed on (1).
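With the displayed equation lost in this version of the text, a standard statement of the functional in (1), using u for the unknown image, b for the data, K for the observation operator, and λ > 0 for the regularization parameter, reads:

```latex
T(u) \;=\; F(Ku, b) \;+\; \lambda\,\mathrm{TV}(u),
\qquad
\hat{u} \;=\; \arg\min_{u}\; T(u),
```

where F(Ku, b) is the noise-model-dependent fidelity term (the squared ℓ2 distance in the Gaussian case).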
While in Section 4 we will give a complete list of several types of numerical algorithms to solve (1), here we mention that such algorithms can be classified into those that need to solve a linear system of equations and those that do not. Usually the former can handle the denoising as well as the deconvolution problems, whereas the latter can only handle the denoising problem. Moreover, for the cases in which a nonnegativity constraint needs to be imposed on (1), such as for the Poisson and Speckle noise models, numerical algorithms differ in how they handle the nonnegativity constraint, which may incur additional computational cost.
The outline of the paper is as follows: in Section 2 we describe the more common image noise models, and we also include a brief description of how to find the best estimate of observed data corrupted with a known noise model using the criterion based on the MAP (maximum a posteriori) estimator. Next, in Section 3, we provide a general review of optimization procedures, with a particular focus on the TV problem, where we also include a more detailed description of some general algorithms that do not usually appear in well-known textbooks. In Section 4 we summarize several TV numerical algorithms for each noise model (Gaussian, Impulse, Poisson, Gamma, and mixed Gaussian plus Impulse) considered in the present paper; we also include some details that differentiate the Total Variation formulation for grayscale images from that for color (vector-valued) images. Numerical simulations and examples are presented throughout Section 5. Finally, concluding remarks are listed in Section 6.
2. Image Noise Models
In this section we will describe the more common image noise models, including the Gaussian, Impulse, Poisson, Speckle (Gamma), and mixed Gaussian and impulse noise models; nevertheless, we will start by succinctly describing the basics of how to find the best estimate of observed data corrupted with a known noise model using the criterion based on the MAP (maximum a posteriori) estimator.
We will assume that u, b, and û are 1-dimensional (column) or 1D vectors that represent the original, observed, and reconstructed 2D grayscale or color images, respectively, obtained via any ordering (although the most reasonable choices are row-major or column-major); that is, for the grayscale case u is a single such vector, whereas a 2D color image is represented by the 1D (column) vector obtained by stacking the channel vectors u_c, where each u_c with c ∈ {R, G, B} represents a grayscale image. Additionally, we will also consider the observation operator K (see (1)) to be a matrix, which we will assume to be either the identity or to represent an observation operator (e.g., blur); furthermore, for the case of color images, the observation operator is assumed to be decoupled; that is, K is a block-diagonal matrix with blocks K_c; if K is coupled (interchannel blur) due to channel crosstalk, it is possible to reduce it to a block-diagonal matrix via a similarity transformation.
The MAP estimator assumes that the prior noise model (in the observed image b) is known; then the probability distribution function for pixel k, given by p(b_k | u_k), will be described by the particular noise model that degrades the observed image; furthermore, if we assume that the noise affecting the (observed) image is independent across pixels, then we can write the joint pdf p(b | u) = Π_k p(b_k | u_k).
Since the aim of any restoration procedure is to estimate u conditioned on the knowledge of the observed data b, Bayes' law directly leads to the description of the a posteriori conditional probability distribution function p(u | b) = p(b | u) p(u) / p(b), where the maximization of (3) will lead us to the classical MAP estimator; moreover, this maximization is equivalent to the minimization of its negative logarithm. If we additionally assume that p(u) and p(b) are constant, then the MAP estimator can be found by minimizing −log p(b | u).
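With the inline equations lost in this version of the text, the Bayes/MAP chain described above can be written compactly as (u for the image, b for the data):

```latex
\hat{u}_{\mathrm{MAP}}
  = \arg\max_{u}\; p(u \mid b)
  = \arg\max_{u}\; \frac{p(b \mid u)\, p(u)}{p(b)}
  = \arg\min_{u}\; \bigl[-\log p(b \mid u) - \log p(u)\bigr],
```

which, under a flat prior p(u), reduces to minimizing −log p(b | u), that is, the data fidelity term used in (1).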
In the next subsections we will use the previous description for each particular noise model to find its MAP estimator, which will be incorporated as the fidelity term in (1).
2.1. The Gaussian Noise Model
The additive Gaussian noise model is the most frequently occurring noise model, used to describe thermal noise as well as the limiting behavior of other noise models (for details see the Central Limit theorem, [7, Chapter 8.4]); the model is given by b = Ku + η, where the forward linear operator K models an observation operator, which we will assume to be either the identity or to represent a blur operator, and η is a sample of a random vector of independent Gaussian variables, each with probability density function p(η_k) ∝ exp(−η_k²/(2σ²)).
Assuming zero-mean noise with constant variance σ², it is straightforward to check that p(b | u) ∝ exp(−‖Ku − b‖₂²/(2σ²)). Using the MAP criterion (and since σ is a constant), the estimator will be found by minimizing ‖Ku − b‖₂². Typically this MAP estimator is referred to as the solution of the optimization problem given by min_u (1/2)‖Ku − b‖₂². This result is the classical and well-known direct approach to inverse filtering under additive Gaussian noise.
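As a minimal sketch (hypothetical data, not from the paper) of the Gaussian MAP estimator under a flat prior, consider repeated noisy observations of a constant signal with K equal to the identity: the ℓ2 cost above is then minimized by the sample mean.

```python
import numpy as np

# Hypothetical illustration: under i.i.d. Gaussian noise and a flat prior,
# the MAP estimate of a constant signal observed N times minimizes
# 0.5*||u - b||_2^2, whose closed-form minimizer is the sample mean.
rng = np.random.default_rng(0)
u_true = 5.0
b = u_true + rng.normal(scale=0.5, size=10_000)  # noisy observations

def l2_cost(u, b):
    """Gaussian negative log-likelihood up to constants."""
    return 0.5 * np.sum((u - b) ** 2)

u_map = b.mean()  # closed-form minimizer of the quadratic cost
# Any perturbation of the minimizer increases the cost:
assert l2_cost(u_map, b) < l2_cost(u_map + 0.01, b)
assert l2_cost(u_map, b) < l2_cost(u_map - 0.01, b)
```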
We finish this subsection with a concise list of general denoising methods for the additive Gaussian noise model. Well-known restoration algorithms include the bilateral filter, Total Variation denoising, the SUSAN filter, wavelet-based denoising, nonlocal means, and so forth; several of the above-mentioned algorithms have been reviewed and compared in the literature. Finally, we mention that patch-based approaches, which group 2D image fragments (or patches) with similar characteristics, are widely considered to be the current leaders in terms of reconstruction quality for the denoising problem; the BM3D algorithm (and its derived variants) is a good example of such approaches.
2.2. The Impulse Noise Model
Degradations due to (wireless) transmission errors appearing during the acquisition of digital images are usually modeled as Impulse noise, which can be expressed as b = (1 − s)·(Ku) + s·n, where arithmetic operations are to be considered element-wise, s is a sample drawn from an i.i.d. multivariate Bernoulli distribution with success probability p, 1 represents a vector with all elements equal to one (both 1 and s have the appropriate dimension), and n is either Salt & Pepper noise or random-valued impulse noise; for the former case, each n_k takes the minimum or maximum admissible intensity with probability p₁ and p₂, respectively (p₁ + p₂ = 1), and for the latter case n_k is drawn from a uniform distribution over the admissible intensity range.
Since Impulse noise (and Salt & Pepper noise in particular) is an example of heavy-tailed noise (its probability density function, or pdf, approaches zero more slowly than the Gaussian pdf), it is usually modeled as additive white Laplacian noise (it has been noticed that Laplace noise has the heavy-tail behavior associated with impulsive noise); in other words, the model described by (7) is approximated by b = Ku + η, where η is a sample of a random vector of independent Laplacian variables, each with probability density function p(η_k) ∝ exp(−|η_k|/β); as in the previous subsection, it is easy to verify that p(b | u) ∝ exp(−‖Ku − b‖₁/β). Using the MAP criterion, the estimator can be found by solving the optimization problem min_u ‖Ku − b‖₁.
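A minimal sketch (hypothetical data) of why the ℓ1 fidelity term suits impulse-like noise: for a constant signal, the Laplacian MAP estimate minimizes the sum of absolute deviations, whose minimizer is the sample median, which is robust to outliers, whereas the ℓ2 (Gaussian MAP) estimate, the mean, is not.

```python
import numpy as np

# Hypothetical data: a constant signal with Laplacian noise plus
# impulse-like outliers ("salt" pixels set to 255).
rng = np.random.default_rng(1)
b = np.full(1000, 5.0) + rng.laplace(scale=0.5, size=1000)
b[:50] = 255.0  # inject gross outliers

def l1_cost(u, b):
    """Laplacian negative log-likelihood up to constants."""
    return np.sum(np.abs(u - b))

u_med = np.median(b)   # l1 (Laplacian MAP) estimate
u_mean = b.mean()      # l2 (Gaussian MAP) estimate
assert l1_cost(u_med, b) <= l1_cost(u_mean, b)   # median minimizes the l1 cost
assert abs(u_med - 5.0) < abs(u_mean - 5.0)      # median resists the outliers
```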
The use of the -norm (as in (9)) has attracted attention in the past decades [15–20] due to a number of advantages, including superior performance with impulse noise in scenarios ranging from detection to variational models.
There are several (nonvariational) approaches to denoise images corrupted with Salt & Pepper noise (as well as the more general case of Impulse noise); here we succinctly mention the classical 2D median filter and its variants, such as the rank-order-based adaptive median filter for the Salt & Pepper case (which uses a two-phase approach: it first detects the pixels corrupted with Salt & Pepper noise and then filters only those pixels), or the center-weighted median filter for the random-valued impulse noise case. We also mention other methods that use a two-phase approach: the progressive switching median filter, the filter based on rank-ordered absolute difference statistics, the fuzzy impulse noise detection and reduction method, and the directional weighted median filter, among others.
2.3. The Poisson Noise Model
Fundamentally, almost every device for image acquisition is a photon counter: Positron Emission Tomography (PET), computed tomography (CT), radiography, CCD cameras, and so forth. When the count level is low, the resulting image is noisy; such noise is usually modeled via the nonhomogeneous Poisson (NHP) model, with the resulting noise being nonadditive and pixel-intensity dependent. Given u, the noise-free image, we consider each b_k (pixel of the observed image b) to be an independent Poisson random variable such that E[b_k] = (Ku)_k and Var[b_k] = (Ku)_k; formally, the Poisson noise is defined as η = b − Ku, with zero mean and signal-dependent variance, which stresses the fact that Poisson noise is signal dependent; furthermore, this characteristic (that the Poisson noise variance is signal dependent and, particularly for images, pixel or spatially dependent) prevents us from directly treating it as Gaussian noise (by invoking the Central Limit theorem, see [7, Chapter 8.4]) even when the count (Ku)_k is not low, since the variance of such an approximation would be spatially dependent and thus in contradiction with the standard Gaussian noise model (which assumes a constant variance).
Following a similar approach to that of the previous subsections, we will assume that each pixel of the observed image b is a sample of a random vector of independent Poisson variables, with a joint pdf given by p(b | u) = Π_k ((Ku)_k^{b_k} e^{−(Ku)_k}) / b_k!, where K models an observation operator (identity or blur) and u is the unknown noise-free image, which is usually assumed to be nonnegative.
Using the MAP criterion, the estimator can be found by solving the optimization problem given by min_u Σ_k ((Ku)_k − b_k log (Ku)_k), where (Ku)_k represents the kth element of Ku. The minimization problem summarized by (11) was first presented in [28, 29], where the (now classical) Richardson-Lucy algorithm was introduced.
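The Richardson-Lucy iteration mentioned above can be sketched as follows: a minimal 1-D version (hypothetical signal and kernel, chosen here for illustration) of the multiplicative update u ← u · Kᵀ(b / (Ku)), which keeps the iterates nonnegative.

```python
import numpy as np

def richardson_lucy(b, kernel, n_iter=50):
    """Richardson-Lucy multiplicative updates for Poisson deconvolution:
    u <- u * K^T(b / (K u)), where K is convolution with `kernel`.
    Sketch for a 1-D signal; assumes a normalized, nonnegative kernel."""
    k_flip = kernel[::-1]            # adjoint of convolution (flipped kernel)
    u = np.full_like(b, b.mean())    # positive initialization
    eps = 1e-12                      # guard against division by zero
    for _ in range(n_iter):
        Ku = np.convolve(u, kernel, mode="same")
        u = u * np.convolve(b / (Ku + eps), k_flip, mode="same")
    return u

# Hypothetical example: blur a nonnegative signal, add Poisson noise, restore.
rng = np.random.default_rng(2)
u_true = np.zeros(64)
u_true[20:30] = 100.0
kernel = np.ones(5) / 5.0
b = rng.poisson(np.convolve(u_true, kernel, mode="same")).astype(float)
u_hat = richardson_lucy(b, kernel)
assert np.all(u_hat >= 0)  # iterates stay nonnegative, as required
```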
It must be noted that the functional (Poisson likelihood) is not Lipschitz continuous when (Ku)_k (for any k) is close to zero; that is, it is not possible to find a constant satisfying the standard Lipschitz bound if any element of Ku is (close to) zero; therefore, any given algorithm that optimizes a cost functional that includes the Poisson likelihood must take this consideration into account.
Finally, we mention that there are several methods to restore (denoise/deconvolve) nonnegative images corrupted with Poisson noise, including the Richardson-Lucy algorithm and its regularized variants as well as wavelet methods, Bayesian methods, and so forth. Additionally, we also note that there are methods [31–33] based on Variance-Stabilizing Transformations or VST (the Anscombe or Freeman-Tukey VST for this particular case): given a set or sequence of heteroskedastic random variables (random variables with different finite variances) with the same underlying distribution family, a VST is a specific distribution transformation that renders that set or sequence of random variables homoskedastic (random variables with the same finite variance); moreover, such a transformation is chosen so that the transformed random variables are approximately Gaussian with unit variance. These methods ([31–33]) attain very high-quality reconstruction results by using state-of-the-art denoising methods for the Gaussian noise model and then applying the inverse VST.
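The Anscombe transform mentioned above, A(x) = 2√(x + 3/8), is easy to verify numerically: for Poisson data with a mean that is not too low, the transformed samples have standard deviation close to 1, regardless of the mean photon count.

```python
import numpy as np

def anscombe(x):
    """Anscombe VST: A(x) = 2*sqrt(x + 3/8); for Poisson data with mean
    roughly >= 4, the transformed variable has variance approximately 1."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(3)
for lam in (5.0, 20.0, 100.0):           # different mean photon counts
    x = rng.poisson(lam, size=200_000)
    s = anscombe(x).std()
    assert 0.9 < s < 1.1, (lam, s)       # variance stabilized to ~1
```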
2.4. The Speckle (Gamma) Noise Model
For images acquired via ultrasound, SAR (synthetic aperture radar), or coherent light imaging systems (e.g., the Fabry-Perot interferometer), Speckle noise is a critical factor that limits their visual perception and processing (see [36, 37] among others). Typically the multiplicative model b = (Ku)·η is considered, where (as in previous subsections) K is the observation operator, u is the noise-free data, and η is the noise; also, it is assumed that E[η_k] = 1. Moreover, in the Speckle noise model it is considered that η is a sample of a random vector of independent Gamma variables, each with a pdf proportional to η_k^{L−1} e^{−Lη_k}, with E[η_k] = 1 and Var[η_k] = 1/L, where the normalizing constant involves the Gamma function, defined for positive arguments.
The derivation of the MAP estimator is not as straightforward as for the previous noise models; in what follows we summarize the derivation of the MAP estimator as originally presented in the literature, which leads to a nonconvex optimization problem.
We first reproduce Proposition 3.1 (for the proof, see the cited reference): assume that the image and the noise are independent random variables with continuous pdfs; then the pdf of their (element-wise) product can be expressed in terms of these pdfs, where arithmetic operations are to be understood as element-wise. Using this result, the MAP estimator can be computed by maximizing the resulting a posteriori pdf, and such maximization derives in the following nonconvex optimization problem: min_u Σ_k (log (Ku)_k + b_k/(Ku)_k). As for the Poisson noise model, it must be noted that the functional (Gamma likelihood) is not Lipschitz continuous when (Ku)_k (for any k) is close to zero, and thus any given algorithm that optimizes a cost functional that includes the Gamma likelihood must take this consideration into account. A detailed proof of the existence and uniqueness of this MAP estimator (15), when used within a variational model, has been given by the original authors.
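With the displayed equations lost in this version, a standard statement of the resulting variational (Aubert-Aujol-type) speckle problem, using the symbols introduced above, reads:

```latex
\hat{u} \;=\; \arg\min_{u \,\ge\, 0}\;
  \sum_{k} \left( \log (Ku)_k + \frac{b_k}{(Ku)_k} \right)
  \;+\; \lambda\,\mathrm{TV}(u),
```

whose fidelity term is nonconvex in u, in contrast to the Gaussian, Laplacian, and Poisson cases.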
An extensive list of despeckle filtering algorithms is available in the literature, including methods based on linear filtering (first-order statistics, local statistics, etc.), nonlinear filtering (median, linear scaling, etc.), diffusion filtering (anisotropic diffusion, etc.), and wavelet filtering; we also mention that methods following a VST-like approach can be an effective alternative.
2.5. The Mixed Gaussian-Impulse Noise Model
As mentioned before, acquired digital images are frequently subject to additive Gaussian noise; if we then add the degradation due to transmission errors, the observed image will be corrupted by Gaussian noise mixed with salt-and-pepper (constant-valued impulse) noise or random-valued impulse noise. The mixed Gaussian and impulse noise model is summarized by b = (1 − s)·(Ku + η) + s·n, where we have the same considerations as for the Impulse noise model (see Section 2.2, in particular the considerations for (7)), and additionally η is a sample of a random vector of independent Gaussian variables, each with probability density function p(η_k) ∝ exp(−η_k²/(2σ²)).
The key idea of the methods that target this mixed noise model is to use a two-phase approach: detect the outlier pixels before proceeding with the filtering phase, in which the detected outliers (pixels corrupted with Impulse noise) and the rest of the pixels are filtered using a suitable (and sometimes independent) method for each particular noise model. For example (in the case of non-TV-based methods), a universal noise removal filter has been introduced that first detects the impulse-corrupted pixels, estimates their local statistics, and incorporates them into the bilateral filter, resulting in the trilateral filter. Somewhat similar ideas were later used to develop a similarity principle which in turn drives the weights of the "mixed noise filter"; its reported reconstruction performance outperformed that of the trilateral filter.
Although all the methods that target (16) assume that the set of outliers (pixels corrupted with impulse noise) is known, in practice this set must be estimated; for this purpose there are several options (performance comparisons between several of the listed methods are available): the rank-order-based adaptive median filter, the progressive switching median filter, the filter based on rank-ordered absolute difference statistics, the fuzzy impulse noise detection and reduction method, the center-weighted median filter, and the directional weighted median filter, among others.
3. Optimization Algorithms: A Review from the Total Variation's Perspective
Optimization theory deals with methods to find the argument of a cost function that minimizes (maximizes) it, where the cost function is a quantitative measure of the performance of a given system. In this Section we focus on listing general optimization procedures that are usually used to solve the TV problem, giving references to well-known textbooks or succinctly describing such general optimization procedures. For a general introduction to the topic of optimization theory we recommend [42, 43] among other textbooks.
We first start this Section by recalling the general TV problem (as described in (1)) and by stressing that a nonnegativity constraint (u ≥ 0) may be imposed on (17); also, throughout this paper we will consider the discrete (isotropic) version of TV(u) to be given by TV(u) = ‖√(Σ_c ((D_h u_c)² + (D_v u_c)²))‖₁, where c ranges over a single channel (grayscale case) or over "red," "green," and "blue" (color case; note that c could range over an arbitrary number of channels), the horizontal and vertical discrete derivative operators are denoted by D_h and D_v, respectively, and the scalar operations are considered to be applied element-wise. We also note that (18) is the generalization of TV regularization to color images (for other choices see [44]) with coupled channels (see [45, Section 9]), typically used within the color or vector-valued TV framework, which coincides with the discrete version of TV for grayscale images (a single channel in (18)).
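The discrete isotropic TV above can be computed directly with forward differences; a minimal grayscale sketch (the replicate boundary handling is one common choice, assumed here for illustration):

```python
import numpy as np

def tv_isotropic(u):
    """Discrete isotropic TV via forward differences with replicate
    (Neumann) boundaries: sum over pixels of sqrt(Dh(u)^2 + Dv(u)^2)."""
    dh = np.diff(u, axis=1, append=u[:, -1:])  # horizontal differences
    dv = np.diff(u, axis=0, append=u[-1:, :])  # vertical differences
    return np.sqrt(dh**2 + dv**2).sum()

# A constant image has zero total variation; a vertical step edge of
# height h spanning r rows contributes h per row.
assert tv_isotropic(np.ones((8, 8))) == 0.0
step = np.zeros((4, 4))
step[:, 2:] = 3.0
assert tv_isotropic(step) == 12.0  # 4 rows x jump of 3
```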
Due to the nondifferentiability of TV(u), it is sometimes replaced by a smooth approximation obtained by applying to the gradient magnitude some sensible choice of function φ_ε, with ε > 0, such as φ_ε(t) = √(t² + ε) or the Huber function. We also mention that in other cases an anisotropic separable approximation of TV is preferred, that is, ‖D_h u‖₁ + ‖D_v u‖₁.
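A minimal sketch of the Huber smoothing mentioned above (the ε = 0.1 default is an arbitrary illustration value): quadratic near zero, so it is differentiable at the origin, and linear (offset by ε/2) away from it, so it closely tracks |t|.

```python
import numpy as np

def huber(t, eps=0.1):
    """Huber smoothing of |t|: t^2/(2*eps) for |t| <= eps,
    |t| - eps/2 otherwise; C^1 everywhere, unlike |t|."""
    a = np.abs(t)
    return np.where(a <= eps, t**2 / (2 * eps), a - eps / 2)

t = np.linspace(-2, 2, 401)
assert huber(np.array(0.0)) == 0.0
assert np.allclose(huber(np.array(2.0)), 2.0 - 0.05)  # ~|t| far from zero
assert np.all(np.diff(huber(t)[t >= 0]) >= 0)         # nondecreasing for t >= 0
```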
Since its original formulation, the development of numerical algorithms for solving (17), as well as its nonnegativity-constrained version, has attracted considerable interest; while in Section 4 we will give a complete list of such algorithms, here we focus on briefly describing or listing (whenever already described in well-known textbooks) general optimization procedures that are usually used to solve (17). For instance, in [49, Chapter 8] complete descriptions of several general optimization procedures for TV are given, such as Steepest Descent (also referred to as the artificial time-marching approach, see [44]), Newton's method, the lagged fixed-point iteration (originally described in [50, 51]), and the Primal-dual Newton method. Although all of these algorithms focus on the ℓ2-TV case (Gaussian noise model), several of them have been used for other noise models (see Section 4). Similarly, several other numerical algorithms for TV have been described; of particular interest is the dual formulation of the TV problem described in [44], which is a different approach from the Primal-dual Newton method (and was used to derive an effective algorithm).
The enforcement of a nonnegativity constraint, that is, u ≥ 0, on the solution of (17) is not only physically meaningful in most cases (images acquired by digital cameras, CT, and so forth); it also improves the quality of the reconstruction. Nevertheless, the nonnegativity constraint is seldom considered in practice (unless explicitly needed by the noise model), since it makes a hard problem even harder. There are several well-known methods to attack this problem: for instance, in [49, Chapter 9] very good descriptions are given for the gradient projection method (which can be understood as a generalization of the Steepest Descent method) and for the Projected Newton method (as well as its variants).
We finalize this Section by succinctly describing three general optimization procedures that have been used to solve the TV problem: Iterative Reweighted Least Squares (IRLS), the Alternating Direction Method of Multipliers (ADMM) or Split Bregman (SB), and Nonnegative Quadratic Programming (NQP).
3.1. Iterative Reweighted Least Squares (IRLS)
Since its introduction, the IRLS method has been applied to a variety of optimization problems. Originally, the IRLS method was used to solve the ℓp minimization problem min_x ‖Ax − b‖_p by iteratively approximating it with a weighted ℓ2 norm: at iteration k the solution is the minimizer of ‖W^(k)(Ax − b)‖₂², with a weighting matrix W^(k) built from the residual of the previous iterate. For suitable values of p this algorithm converges to the global minimizer, while for small p the definition of the weighting matrix must be modified to avoid numerical instability due to division by zero or by a very small value. A standard approach is to threshold elements of the residual in constructing the corresponding elements of W^(k), but other choices are also possible [57, 58]. For p < 1 it has been shown, within a sparse representation framework, that the IRLS algorithm not only converges but increases its convergence rate as p goes to zero.
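A minimal IRLS sketch for the ℓ1 case (p = 1), on a hypothetical robust line-fit: each iteration solves a weighted least-squares problem with weights 1/max(|r|, eps) built from the previous residual, the thresholding mentioned above guarding against division by (near) zero.

```python
import numpy as np

def irls_l1(A, b, n_iter=50, eps=1e-6):
    """IRLS sketch for min_x ||Ax - b||_1: each iteration solves the
    weighted normal equations A^T W A x = A^T W b with weights
    w_k = 1 / max(|r_k|, eps), r = Ax - b from the previous iterate."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # l2 initialization
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)   # thresholded IRLS weights
        Aw = A * w[:, None]                    # rows of A scaled by w
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x

# Hypothetical line fit with gross outliers: the l1 solution ignores them.
t = np.linspace(0, 1, 100)
A = np.column_stack([t, np.ones_like(t)])
b = 2.0 * t + 1.0          # exact line: slope 2, intercept 1
b[:5] += 50.0              # corrupt a few observations
x = irls_l1(A, b)
assert abs(x[0] - 2.0) < 0.2 and abs(x[1] - 1.0) < 0.2
```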
Without loss of generality, we will focus on the ℓ2-TV case for grayscale images: min_u (1/2)‖Ku − b‖₂² + λ·TV(u), where K is the forward operator, b is the observed noisy data, λ is a weighting factor controlling the relative importance of the data fidelity and regularization terms, and u is the restored image.
The key idea is to express the regularization term by a quadratic approximation of the form (1/2)‖W^{1/2} D u‖₂², where D stacks the discrete derivative operators D_h and D_v (extended to the 2D image via identity matrices and Kronecker products) and the diagonal weighting matrix W is built from the gradient magnitudes of the current iterate. The resulting iterations can be expressed in the form of the standard IRLS problem:
For a given current solution u^(k), the weighting matrix can be easily computed, and the threshold (to avoid numerical instability due to division by zero or by a very small value) may be automatically adapted to the input image [59]. Finally, the resulting algorithm has to iteratively solve the linear system (KᵀK + λ Dᵀ W^(k) D) u = Kᵀb, which is its most computationally demanding part. The same strategy can be used to solve all other noise models within the TV framework, including the vector-valued (color) TV.
Moreover, for the particular case of denoising (K = I in (22)) the solution of the linear system described by (23) is given by u = (I + λ Dᵀ W D)⁻¹ b; if we now apply the (well-known) matrix inversion lemma to (24), we get u = b − Dᵀ((λW)⁻¹ + DDᵀ)⁻¹ D b. It is important to notice that W⁻¹ can be computed directly (no division by zero); furthermore, by solving ((λW)⁻¹ + DDᵀ) z = D b we can compute u via u = b − Dᵀz. This approach ((26) and (27)) was first proposed, within the Total Variation framework (particularly for the ℓ1-TV denoising case), in the literature.
3.2. Alternating Direction Method of Multipliers (ADMMs) or Split Bregman (SB)
Alternating minimization methods have become popular in the past few years due to their ability to solve regularized problems, that is, min_u F(u) + λ·R(u), where R is the regularization term ((17) being a particular example with R(u) = TV(u)), in a simple and computationally efficient fashion. Although there are several incarnations of these methods, we focus on the Split Bregman (SB) [62, 63] algorithm, while noting that it is now recognized that the SB algorithm is equivalent to the older Alternating Direction Method of Multipliers (ADMM) [64, 65] algorithm.
The key idea of the SB method [62, 63] is to introduce an auxiliary variable d to modify the original cost functional (17) so that it can be iteratively minimized using simple steps per iteration; this is done by first considering the constrained optimization problem min_{u,d} F(u) + λ·R(d) subject to d = Du. The standard Lagrangian of (28) leads to one possible formulation; nevertheless, the augmented Lagrangian is preferred since it gives a more robust framework (see [61, Section 2.3]). Furthermore, by absorbing the Lagrange multiplier into an auxiliary vector, we can write (29) in a form that can be iteratively solved by considering alternating optimization problems over u and d. It is important to highlight some observations regarding these optimization problems: assuming that F is quadratic, then (31) is a generalized Tikhonov problem step, which is simple to solve; if R can be expressed as a norm of d, then (32) can be solved via shrinkage (in some cases generalized shrinkage is needed, see [61, Section 4.1]).
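The shrinkage steps mentioned above have simple closed forms; a minimal sketch of both the scalar soft-thresholding used for anisotropic/ℓ1 terms and the generalized (vector) shrinkage used for isotropic TV:

```python
import numpy as np

def shrink(x, gamma):
    """Soft-thresholding: closed-form solution of
    min_d gamma*|d| + 0.5*(d - x)^2, applied element-wise."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def shrink_iso(dh, dv, gamma, eps=1e-12):
    """Generalized (vector) shrinkage for isotropic TV: shrink the
    gradient magnitude and rescale both components accordingly."""
    mag = np.sqrt(dh**2 + dv**2)
    scale = np.maximum(mag - gamma, 0.0) / (mag + eps)
    return scale * dh, scale * dv

assert shrink(np.array([3.0, -0.5, 0.0]), 1.0).tolist() == [2.0, 0.0, 0.0]
gh, gv = shrink_iso(np.array([3.0]), np.array([4.0]), 1.0)  # magnitude 5 -> 4
assert np.allclose([gh[0], gv[0]], [2.4, 3.2])
```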
It is also important to note that the SB (or ADMM) method can also be used to solve the nonnegative quadratic optimization problem (see (35)) or to solve the optimization problems derived from the Poisson and Speckle noise models (see Sections 2.3 and 2.4) using a similar approach to that described in the previous paragraphs; for complete descriptions we refer the reader to [61, Section 5.2] and [66, 67], respectively.
The general SB (or ADMM) algorithm can be easily adapted to handle isotropic TV (with the TV term replaced by its appropriate discrete version for grayscale and vector-valued images); as an example we succinctly focus on the ℓ2-TV case for grayscale images, where the auxiliary variable d is used along with constraints on the horizontal and vertical differences. Besides operations such as generalized shrinkage (see [63, Section 4.1]) and auxiliary vector updates (which are not computationally demanding), used to solve the particular optimization problems (related to the general ones, i.e., (32) and (33)), the SB-TV algorithm has to solve the linear system (34). Note that the left-hand side (LHS) of (34) is constant across iterations while its right-hand side (RHS) changes at each iteration; the opposite is true for the resulting linear system in (23). Furthermore, since the LHS of (34) is constant across iterations (and strictly diagonally dominant in practice), it seems natural to use the Gauss-Seidel method to solve (34); moreover, when K is the identity or a circulant (diagonalizable by the Fourier transform) matrix, (34) may also be solved in the Fourier or DCT domain.
One key computational difference between the SB-TV and the IRLS-based algorithms is that even though the number of floating-point operations per SB-TV iteration is slightly smaller than that of an IRLS-TV iteration, the number of global iterations needed to attain good reconstruction results is typically smaller for the latter algorithm than for the former.
3.3. Nonnegative Quadratic Programming (NQP)
Recently, an interesting and quite simple algorithm has been proposed to solve the NQP problem min_{v ≥ 0} (1/2)vᵀAv + bᵀv, where the matrix A is assumed to be symmetric and positive definite (an upper bound on v, given by some positive constant, may also be enforced). The multiplicative updates for the NQP are summarized as follows: splitting A into its nonnegative parts, A = A⁺ − A⁻, each element of v is updated as v_k ← v_k · (−b_k + √(b_k² + 4(A⁺v)_k (A⁻v)_k)) / (2(A⁺v)_k), where all algebraic operations are to be carried out element-wise. One key observation is that this algorithm cannot be initialized with zeros. Interestingly, once an element has been zeroed by the multiplicative updates it remains zero under successive updates; this property is especially appealing when this algorithm is applied to sparse representation problems. Furthermore, NQP is quite efficient and has been used to solve interesting problems in statistical learning and compressive sensing, among others.
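A minimal sketch of the multiplicative NQP updates described above (the 2×2 system is a hypothetical example): on a problem whose unconstrained minimizer is already nonnegative, the iterates stay nonnegative and converge to it.

```python
import numpy as np

def nqp(A, b, n_iter=200):
    """Multiplicative updates for min_{v >= 0} 0.5*v^T A v + b^T v with A
    symmetric positive definite. A is split into nonnegative parts
    A = Ap - An; the element-wise updates preserve v >= 0."""
    Ap = np.maximum(A, 0.0)
    An = np.maximum(-A, 0.0)
    v = np.ones(len(b))            # must NOT be initialized with zeros
    for _ in range(n_iter):
        a_p = Ap @ v
        a_n = An @ v
        v = v * (-b + np.sqrt(b**2 + 4.0 * a_p * a_n)) / (2.0 * a_p + 1e-32)
    return v

# Hypothetical 2x2 example: compare with the unconstrained minimizer
# -A^{-1} b, which here happens to be nonnegative already.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([-1.0, -1.0])
v = nqp(A, b)
v_star = np.linalg.solve(A, -b)    # unconstrained optimum
assert np.all(v >= 0)
assert np.allclose(v, v_star, atol=1e-4)
```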
We finalize this subsection by noting that the constrained problem (38) could be iteratively solved using an IRLS-type approach that benefits from the NQP algorithm, by approximating (38) with the weighted quadratic problem (39) and by an appropriate choice of the matrix and vector in (37).
4. Numerical Algorithms for TV
In the following subsections we will summarize a complete list of TV numerical algorithms for each noise model; we will also include the particular cost functional that is targeted in each case, highlighting particular cases such as the methods that adapt the regularization parameter, either in a global or a local fashion.
4.1. TV Numerical Algorithms for the Gaussian Noise Model
The minimization of the cost functional (1/2)‖Ku − b‖₂² + λ·TV(u) is usually referred to as the ℓ2-TV solution. Since its original formulation, a large number of algorithms have been proposed to specifically solve (40); generally speaking, these algorithms can be classified based on the necessity (or not) of solving a linear system of equations.
For instance, algorithms based on a dual formulation of (40) do not need to solve a linear system of equations and are usually computationally efficient, although they lack the ability to handle a nontrivial forward (observation) operator in (40); the Chambolle algorithm and Aujol's ℓ2-TV approximation (or A2BC model) are examples of this type of method.
On the other hand, methods that are based on the Euler-Lagrange equations (derived from the direct optimization of (40)) usually use a smooth approximation of TV (see Section 3) and need to solve a linear system of equations, which is usually carried out via conjugate gradient (CG), preconditioned CG (PCG), or the Fourier transform (when the forward operator is equivalent to a circular convolution). For example, we mention that the steepest descent (artificial time marching) algorithm, along with a line search, has been used to solve (40) (the computational performance of the resulting method is very slow compared to all others); more elaborate algorithms include the primal-dual method (which, although it uses a dual formulation, does need to solve a set of linear equations), the lagged diffusivity algorithm [50, 51], the majorization-minimization or MM method [72, 73] (which can be considered a generalization of the Expectation-Maximization, or EM, algorithm), as well as its variant for the denoising case that uses the matrix inversion lemma (see (26) and (27)), the TwIST method (which is similar to the MM method previously mentioned and uses a predetermined PCG splitting method), and the Iteratively Reweighted Norm (IRN) (which can be understood as an IRLS method). We also mention the FTVd method [62, 76] (which is an FFT-based algorithm), linear programming via the interior point method (for which the linear system to be solved has twice as many variables as the original problem), and Newton's method.
4.1.1. Vector-Valued ℓ2-TV Algorithms
The TV minimization scheme for deblurring color images has been extended in several directions since its introduction: an algorithm that considers the HSV color model, an algorithm based on a dual formulation (similar to Chambolle's) for the denoising case only, an extension of the latter with a dual/alternating algorithm, a modification of the IRN algorithm to handle vector-valued images (related to the IRLS algorithm for vector-valued datasets), and an extension of the FTVd algorithm [62, 76] to vector-valued images. Finally, we also mention that a direct extension of the lagged diffusivity method, called vectorial lagged diffusivity, has been used for comparison purposes.
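The key ingredient that distinguishes vector-valued TV from applying scalar TV channel by channel is the coupled per-pixel norm over all channel gradients. A small sketch comparing the two quantities (our own helper names; forward differences with zero gradient at the image border):

```python
import numpy as np

def _grad(u):
    ux = np.zeros_like(u); uy = np.zeros_like(u)
    ux[:, :-1] = u[:, 1:] - u[:, :-1]
    uy[:-1, :] = u[1:, :] - u[:-1, :]
    return ux, uy

def tv_channel_by_channel(img):
    """Decoupled TV: sum of the per-channel isotropic TVs of an (m, n, c) image."""
    total = 0.0
    for c in range(img.shape[2]):
        ux, uy = _grad(img[:, :, c])
        total += float(np.sum(np.sqrt(ux**2 + uy**2)))
    return total

def tv_vectorial(img):
    """Coupled (vector-valued) TV: one per-pixel norm over all channel
    gradients, which favors edges that are aligned across channels."""
    sq = np.zeros(img.shape[:2])
    for c in range(img.shape[2]):
        ux, uy = _grad(img[:, :, c])
        sq += ux**2 + uy**2
    return float(np.sum(np.sqrt(sq)))
```

By the triangle inequality, the coupled value never exceeds the decoupled sum; when used as a regularizer, it therefore penalizes misaligned channel edges relatively more, which is why it avoids the color fringing that channel-by-channel TV can produce.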
4.1.2. Nonnegative ℓ2-TV Algorithms
For scalar (grayscale) images, only a handful of numerical algorithms include a non-negativity constraint on the solution of the ℓ2-TV problem (40): for example, [49, Chapter 9] and, more recently, methods that handle non-negativity via the NQP algorithm (see Section 3.3), via a variant of the primal-dual Newton method [49, 52], via a combination of a gradient-based approach with an iterative shrinkage/thresholding algorithm, and via the SB algorithm (see Section 3.2). To the best of our knowledge, for vector-valued (color) images only one method (with non-negativity handled via the NQP algorithm, see Section 3.3) explicitly includes the non-negativity constraint within the ℓ2-TV framework.
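As a concrete instance of the NQP building block used by some of the methods above, the following sketch implements the multiplicative updates of Sha et al. for the nonnegative quadratic program $\min_x \tfrac{1}{2}x^T A x + b^T x$, $x \ge 0$; the function name and the small damping constant in the denominator are our own choices.

```python
import numpy as np

def nqp_minimize(A, b, x0, iters=500):
    """Multiplicative updates for min_x 0.5*x^T A x + b^T x, x >= 0, using
    the split A = Ap - Am into positive and negative parts; starting from a
    positive x0, the iterates stay nonnegative by construction."""
    Ap = np.maximum(A, 0.0)
    Am = np.maximum(-A, 0.0)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        ap = Ap @ x
        am = Am @ x
        x = x * (-b + np.sqrt(b**2 + 4.0 * ap * am)) / (2.0 * ap + 1e-12)
    return x
```

No projection or step-size tuning is needed, which is what makes this update attractive for enforcing non-negativity inside TV solvers whose inner subproblem is quadratic.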
4.1.3. Adaptation of the Regularization Parameter for ℓ2-TV
Several methods have been proposed to adapt the regularization parameter for ℓ2-TV; they differ in how the amount of noise is estimated and in the nature of the regularization parameter, which can be a scalar (global) value or a spatially dependent (diagonal matrix) set of regularization parameters. In the latter case, the regularization parameter is moved from the regularization term into the fidelity term; if we let $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_N)$, then (40) can be written (for the denoising case) as $\min_u \tfrac{1}{2}\|\Lambda^{1/2}(u-b)\|_2^2 + \mathrm{TV}(u)$, where each $\lambda_k$ is updated per pixel (independently) in an iterative fashion.
Succinctly, we mention that for the ℓ2-TV case two adaptive regularization schemes have been presented to spatially update the regularization parameters. Stein's Unbiased Risk Estimate (SURE) has also been used to estimate the mean square error between the observed image and the denoised one, with a global regularization parameter, and an extension of the Unbiased Predictive Risk Estimator (UPRE) has likewise been used to estimate a global regularization parameter.
4.2. TV Numerical Algorithms for the Salt & Pepper Noise Model
TV for the Salt & Pepper noise model is usually referred to as ℓ1-TV; in this case the minimizer of the cost functional $\min_u \|Au-b\|_1 + \lambda\,\mathrm{TV}(u)$ (42) is the solution of the ℓ1-TV problem. The use of the ℓ1 norm for the fidelity term in (42) was a significant development [16, 17, 19, 20] for TV-based restoration methods and attracted attention due to a number of advantages, including superior performance with impulse noise.
The numerical algorithms that solve (42) are based on a variety of methods, although they too can be loosely classified into methods that need to solve a linear system and methods that do not. For instance, in [20, 91] a smooth approximation of the ℓ1 norm was used along with the steepest descent algorithm (the computational performance of this approach is slow compared to all other options). A Markov random field model has also been used, where the resulting algorithm, based on an anisotropic separable approximation, operated over integer-valued images and did not need to solve a linear system of equations; similar ideas were used in [93–95] (which are related to an earlier algorithm) and yielded computationally efficient algorithms (especially [93, 94]). Another method also uses the anisotropic separable approximation and does not need to solve a linear system of equations; nevertheless, it introduces an auxiliary variable (similar to the core idea of ADMM/SB, see Section 3.2), resulting in an alternating algorithm. A second-order cone programming approach has been proposed, although it has high memory requirements and needs to solve a linear system of equations, and the ℓ1-TV problem has also been solved with a (linear programming) interior point method. Finally, we mention that the IRN algorithm (IRLS based) can also handle the ℓ1-TV problem (both denoising and deconvolution) and is a computationally efficient alternative to the Markov-random-field-based methods for the denoising case.
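The IRLS/IRN idea for ℓ1-TV can be sketched as follows: both the ℓ1 fidelity and the TV term are replaced by iteratively re-weighted quadratic terms, and each outer step solves the resulting normal equations with warm-started CG. This is a simplified denoising-only sketch under our own parameter choices, not the reference IRN implementation.

```python
import numpy as np

def _grad(u):  # forward differences, zero at the right/bottom boundary
    ux = np.zeros_like(u); uy = np.zeros_like(u)
    ux[:, :-1] = u[:, 1:] - u[:, :-1]
    uy[:-1, :] = u[1:, :] - u[:-1, :]
    return ux, uy

def _div(px, py):  # chosen so that div = -grad^T
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def _cg(Aop, b, x0, iters=40):
    x = x0.copy(); r = b - Aop(x); p = r.copy()
    rs = float(np.sum(r * r))
    for _ in range(iters):
        Ap = Aop(p)
        denom = float(np.sum(p * Ap))
        if denom <= 1e-30:
            break
        alpha = rs / denom
        x = x + alpha * p; r = r - alpha * Ap
        rs_new = float(np.sum(r * r))
        if rs_new <= 1e-24:
            break
        p = r + (rs_new / rs) * p; rs = rs_new
    return x

def irn_l1tv_denoise(b, lam=1.0, outer=8, eps=1e-3):
    """IRLS-style outer loop for min_u ||u - b||_1 + lam*TV(u): the l1
    fidelity and the TV term become weighted quadratics (weights taken from
    the current iterate), and each outer step solves the SPD normal
    equations (W_f + lam*grad^T W_r grad) u = W_f b via CG."""
    u = b.copy()
    for _ in range(outer):
        wf = 1.0 / np.sqrt((u - b)**2 + eps)      # l1 fidelity weights
        ux, uy = _grad(u)
        wr = 1.0 / np.sqrt(ux**2 + uy**2 + eps)   # TV weights
        def Aop(v, wf=wf, wr=wr):
            vx, vy = _grad(v)
            return wf * v - lam * _div(wr * vx, wr * vy)
        u = _cg(Aop, wf * b, u)
    return u
```

At uncorrupted pixels the fidelity weight grows as $u \to b$, pinning them to their observed values, while large weights never appear at impulse locations, which is the mechanism behind the ℓ1 term's robustness to outliers.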
4.2.1. Adaptation of the Regularization Parameter for ℓ1-TV
The original ℓ1-TV problem (42) features a single regularization parameter λ, which influences the entire pixel set and has a direct impact on the quality of the reconstructed data. Ideally, for the Salt & Pepper noise model, noise-free pixels should preserve their values in the reconstructed (denoised) image. However, the use of a global parameter forces the entire pixel set to be penalized, which results in an inaccurate reconstruction. To the best of our knowledge, [41, 98–100] are the only published papers that tackle this shortcoming for the ℓ1-TV problem.
All the above-mentioned algorithms, in one way or another, minimize a cost functional in which a diagonal matrix Λ of spatially varying regularization parameters replaces the scalar λ, with entries set to zero for pixels declared noise-free. Nevertheless, the approaches differ: in one, a set of corrupted-pixel candidates is first estimated and the ℓ1-TV problem is then solved only for those candidates, with a hand-picked regularization parameter; in the others [41, 100], schemes were proposed to spatially adapt the regularization parameter. In the first of these adaptive schemes, the adaptation is based only on local statistics and on the estimation of the noise level (although all pixels are still regularized); in the latter, which also estimates the set of corrupted-pixel candidates, the adaptation was initially based on similar ideas, but it was later proved that the scheme obtaining the best results (reconstruction quality) is one in which the nonzero entries of Λ are increased at each iteration.
4.3. TV Numerical Algorithms for the Poisson Noise Model
The minimization of the TV-regularized Poisson log-likelihood cost functional [66, 101–108], summarized by $\min_{u \ge 0} \sum_k \big((Au)_k - b_k \log (Au)_k\big) + \lambda\,\mathrm{TV}(u)$ (43), has been successfully employed to restore non-negative images corrupted with Poisson noise in a number of applications, such as PET and confocal microscopy imaging.
Within the TV framework, numerical algorithms that solve (43) can be loosely classified as those that use a second-order Taylor approximation of the data fidelity term to make (43) tractable and those that use alternative approaches.
For the latter case, we mention that an EM algorithm along with a smooth approximation of the TV term has been proposed to solve the Poisson-TV problem; a multiplicative gradient-based algorithm (equivalent to the penalized EM algorithm) has been used; a multilevel algorithm has been applied to a modified version of (43); the regularized loss minimization (a particular case of the ADMM algorithm, see [61, Section 6.3]) has been used to minimize (43); and two alternative variational methods (called L2-L2Log and TV-log) can be understood as a logarithmic change of variable in (43).
Algorithms that do use a second-order Taylor approximation of the data fidelity term must explicitly address the fact that this term is not Lipschitz continuous when any element of $Au$ is close to zero (see Section 2.3). This issue is typically handled by modifying the second-order Taylor approximation or by adding a small constant to the observed data; although the exact original problem is then not solved (due to the approximations involved), the reconstruction (numerical) results have proven competitive. Moreover, these algorithms need to solve a linear system of equations and typically differ in the constrained optimization method used to carry out the actual minimization and in whether the true Hessian or an approximation thereof is used. For instance, a linear approximation of the Hessian has been used along with a quasi-Newton optimization method; a nonnegatively constrained, projected quasi-Newton minimization algorithm has been employed; a constrained TV algorithm combined with such an approximation has been used; an Expectation-Maximization TV approach has addressed the denoising problem only; and the NQP algorithm has been used to optimize (43) for both grayscale and color images.
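A minimal way to sidestep the Lipschitz issue without a Taylor approximation is to run a projected gradient iteration on the exact Poisson objective with a smoothed TV term, keeping the iterates bounded away from zero; the following denoising-only sketch ($A = I$, with our own illustrative parameters) is not one of the cited algorithms.

```python
import numpy as np

def _grad(u):  # forward differences, zero at the right/bottom boundary
    ux = np.zeros_like(u); uy = np.zeros_like(u)
    ux[:, :-1] = u[:, 1:] - u[:, :-1]
    uy[:-1, :] = u[1:, :] - u[:-1, :]
    return ux, uy

def _div(px, py):  # chosen so that div = -grad^T
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def poisson_tv_denoise(b, lam=0.3, beta=1.0, step=0.2, iters=300, floor=1e-3):
    """Projected gradient descent for the denoising (A = I) problem
    min_{u>0} sum(u - b*log(u)) + lam*sum(sqrt(|grad u|^2 + beta));
    projecting onto u >= floor keeps the iterates away from zero, where
    the Poisson fidelity term is not Lipschitz continuous."""
    u = np.maximum(b.astype(float), floor)
    for _ in range(iters):
        ux, uy = _grad(u)
        mag = np.sqrt(ux**2 + uy**2 + beta)
        g = 1.0 - b / u - mag.dtype.type(lam) * _div(ux / mag, uy / mag)
        u = np.maximum(u - step * g, floor)
    return u
```

The fidelity gradient $1 - b/u$ is the derivative of the (negative) Poisson log-likelihood; the floor plays the same role as the small additive constant used by the cited methods.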
4.4. TV Numerical Algorithms for the Speckle (Gamma) Noise Model
The minimization of the non-convex TV multiplicative model, summarized by $\min_{u>0} \sum_k \big(\log (Au)_k + b_k/(Au)_k\big) + \lambda\,\mathrm{TV}(u)$ (44), is not the only alternative within the TV framework to restore images corrupted with Speckle noise. The first method within the TV framework used a constrained optimization approach with two Lagrange multipliers and addressed both the denoising and deconvolution problems. Additionally, the multiplicative model has been converted into an additive one and solved either with a multigrid algorithm applied to the resulting Euler-Lagrange equation or with the SB (ADMM) algorithm (denoising only). Moreover, a framework based on MRFs with levelable priors has been proposed for the restoration of images corrupted by Gaussian or Speckle (Rayleigh) noise (denoising only), and a hard thresholding of the curvelet transform of the log-image followed by ℓ2-TV in the log-image domain has also been used (denoising only).
Regarding the methods that use (44) within the TV framework, we first mention the work that introduced this data fidelity term, which was derived using the MAP criterion; a detailed mathematical study of (44) was also carried out in [38, Section 4] to address several issues, including the fact that (44) is not Lipschitz continuous (see Section 2.4): the unique solution of (44) is such that all its elements are strictly greater than zero. This result was reached without any assumption on the underlying optimization procedure used to solve (44); furthermore, to give numerical evidence of this result, an artificial time-marching approach (steepest descent) was used to numerically solve the resulting Euler-Lagrange equation, addressing both the denoising and deconvolution problems. A general TV formulation has been proposed that includes several models ([38, 110, 112]) as special cases; it also replaces the regularizer TV(u) by TV(log u) and addresses only the denoising problem. The non-convex model has also been augmented with a Weberized TV term as an extra regularizer, solving the Euler-Lagrange equation via a fixed-point iteration (denoising only). Finally, a second-order Taylor approximation of the data fidelity term in (44) has been used along with the NQP algorithm.
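The log-transform route mentioned above (converting the multiplicative model into an additive one) can be sketched in a few lines: denoise $\log b$ with a smoothed ℓ2-TV gradient descent and map back with exp. Parameters and helper names are our own illustrative choices, and the small multiplicative bias of the log/exp round trip is ignored here.

```python
import numpy as np

def _grad(u):  # forward differences, zero at the right/bottom boundary
    ux = np.zeros_like(u); uy = np.zeros_like(u)
    ux[:, :-1] = u[:, 1:] - u[:, :-1]
    uy[:-1, :] = u[1:, :] - u[:-1, :]
    return ux, uy

def _div(px, py):  # chosen so that div = -grad^T
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def speckle_tv_denoise(b, lam=0.15, beta=0.02, step=0.1, iters=300):
    """Log-domain route for multiplicative noise b = u * n (b > 0):
    log(b) = log(u) + log(n) is treated as an additive-noise image,
    denoised by gradient descent on 0.5*||v - log(b)||^2 + lam*TV_beta(v),
    and mapped back with exp (so the output is strictly positive)."""
    vb = np.log(b)
    v = vb.copy()
    for _ in range(iters):
        vx, vy = _grad(v)
        mag = np.sqrt(vx**2 + vy**2 + beta)
        g = (v - vb) - lam * _div(vx / mag, vy / mag)
        v = v - step * g
    return np.exp(v)
```

The positivity of the result is automatic here, which contrasts with methods that work on (44) directly and must argue positivity of the minimizer, as discussed above.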
4.5. TV Numerical Algorithms for the Mixed Gaussian-Impulse Noise Model
A number of algorithms have recently been proposed for denoising images subject to the mixed noise model (16); we also recall that the key idea of the non-TV-based methods is a two-phase approach: detect the outlier pixels first and then proceed with the filtering phase.
Within the TV framework, most of the algorithms also start with an outlier-detection preprocessing phase followed by a denoising phase. One approach was based on two augmented cost functionals [119] that had to be chosen depending on the noise characteristics; its reconstruction performance was competitive for grayscale images, but its computational performance was extremely poor. A later method improved this computational performance, although it remained highly dependent on the noise level, especially for high noise levels. An ℓ1-ℓ0 style minimization approach has been proposed for grayscale and color images, resulting in a three-phase algorithm; its reconstruction quality and computational performance are quite good compared with published works (and could be considered state-of-the-art), but the proposed cost functional [121] is complicated and has several regularization parameters that must be hand-picked, and the dictionary-learning second phase affects the overall computational performance. Finally, a spatially adaptive algorithm (minimized via an IRLS-based algorithm) has been proposed for the denoising problem; its computational performance greatly exceeds that of [120, 121], while its reconstruction quality can be placed somewhere between those reported by [120, 121].
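The two-phase idea can be sketched as follows: flag outliers by their deviation from a 3×3 median, then minimize a masked ℓ2 fidelity plus smoothed TV, so that flagged pixels have no fidelity term and are filled in by the TV prior alone. Threshold, weights, and helper names are our own illustrative choices, not those of any of the cited algorithms.

```python
import numpy as np

def _grad(u):  # forward differences, zero at the right/bottom boundary
    ux = np.zeros_like(u); uy = np.zeros_like(u)
    ux[:, :-1] = u[:, 1:] - u[:, :-1]
    uy[:-1, :] = u[1:, :] - u[:-1, :]
    return ux, uy

def _div(px, py):  # chosen so that div = -grad^T
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def _median3(a):
    # 3x3 median with replicated borders
    p = np.pad(a, 1, mode='edge')
    m, n = a.shape
    windows = [p[i:i + m, j:j + n] for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0)

def mixed_noise_denoise(b, thresh=0.2, lam=0.1, beta=1e-2, step=0.05, iters=200):
    """Two-phase sketch: (1) flag pixels whose deviation from the 3x3
    median exceeds `thresh` as impulse candidates; (2) gradient descent on
    sum(mask*(u - b)^2)/2 + lam*TV_beta(u), so flagged pixels carry zero
    fidelity weight and are inpainted by the TV prior."""
    med = _median3(b)
    mask = (np.abs(b - med) <= thresh).astype(float)  # 1 = trusted pixel
    u = np.where(mask > 0.0, b, med)                  # median init for outliers
    for _ in range(iters):
        ux, uy = _grad(u)
        mag = np.sqrt(ux**2 + uy**2 + beta)
        g = mask * (u - b) - lam * _div(ux / mag, uy / mag)
        u = u - step * g
    return u
```

This mirrors the structure shared by the TV methods above: the detection phase decides which observations to trust, and the variational phase then treats impulse locations as missing data while lightly smoothing the Gaussian noise elsewhere.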
5. Numerical Simulations and Examples
The aim of this section is to provide practical examples of the restoration results that can be obtained with TV for different noise models. We use several (grayscale and color) test images, which are scaled to a common range before being blurred with an out-of-focus kernel; afterwards, they are corrupted with noise (models described in Section 2). To assess the reconstruction quality of the restored images, the SNR (all cases) and SSIM (grayscale images only) metrics are reported.
The results presented throughout this section may be reproduced using a Matlab-only implementation of IRLS-based algorithms for TV with different fidelity terms. Since it is not the aim of this paper to present a computational performance comparison between the many TV numerical algorithms for all the considered noise models, here we report only the run times of this implementation, for completeness' sake and to offer the reader a rough idea of the processing time for each case (elapsed time for Matlab, version R2011a, running on a 2.20 GHz Intel Core i7 laptop with 6144 KB of L2 cache and 6 GB of RAM). Readers interested in a thorough performance comparison between the many TV numerical algorithms are referred to the performance reports listed in the bibliography referenced throughout Section 4.
For the ℓ2-TV case we present simulations comparing the reconstruction performance of the (standard) ℓ2-TV (Section 4.1) with that of the nonnegative ℓ2-TV (Section 4.1.2) for the deconvolution case (see Figure 1). Reconstruction SNR values and computation times are compared in Table 1. The nonnegative ℓ2-TV method gives better results in terms of visual quality (the SNR/SSIM metrics are about the same) than the standard ℓ2-TV method, although the latter is about 2 to 3 times faster.
For the ℓ1-TV case we present simulations comparing the reconstruction performance of the (standard) ℓ1-TV (Section 4.2) with that of the adaptive ℓ1-TV (Section 4.2.1) for the denoising case (see Figure 2). Reconstruction SNR and SSIM values and computation times are compared in Table 2. The adaptive ℓ1-TV method achieves significantly better reconstruction results, in terms of SNR, SSIM, and visual quality, than the standard ℓ1-TV; there is, however, a trade-off for the superior reconstruction quality: the first step of the adaptive ℓ1-TV, that is, the estimation of the set of pixels corrupted with Salt & Pepper noise, is very slow compared to the processing time needed to solve the ℓ1-TV problem itself. We highlight that although we only report the performance for the grayscale “Boats” and color “Barbara” images, these results are representative of all other test images.
For the Poisson-TV method (Section 4.3), we present simulations for the denoising and deconvolution cases. We use the grayscale “Cameraman” and color “Peppers” images, which were first scaled to a chosen maximum value (the lower this value, the noisier the image appears), then blurred (“Peppers” only), and finally corrupted with Poisson noise; this matches typical experimental setups, such as [66, Section 6.3] and [108, Section 4]. In Table 3 (see also Figure 3) we summarize the restoration results for the Poisson-TV method.
Similarly, for the Gamma-TV method (Section 4.4), we present simulations for the denoising and deconvolution cases. We use the grayscale “Tank” and color “Lena” images, which were corrupted with Speckle noise generated according to (12) (the lower the noise parameter, the noisier the image appears; only “Lena” was blurred). This matches typical experimental setups, such as [111, 115, 117]. In Table 4 (see also Figure 4) we summarize the restoration results for the Gamma-TV method.
Finally, for the mixed model TV method (Section 4.5), we present simulations for three denoising cases: in the first, the grayscale “Peppers” image was corrupted with Gaussian plus random-valued impulse noise generated according to (45); in the other two, the color “Lena” image was corrupted with Gaussian plus Salt & Pepper noise and with Gaussian plus random-valued impulse noise, respectively (see Figure 5). Additionally, in Table 5 we summarize the restoration results for the mixed model TV method for a broader combination of Gaussian and impulse noise levels that matches typical experimental setups [118, 120, 121].
6. Conclusions
To the best of our knowledge, we have provided a complete summary of recent TV algorithms for different noise models, that is, the Gaussian, Impulse (e.g., Salt & Pepper), Poisson, and Speckle (e.g., Gamma) models, as well as the mixed Gaussian and impulse noise model. Although for some noise models particular algorithms attain better reconstruction performance (e.g., patch-based approaches for the Gaussian noise model), we expect that in the coming years TV will improve its current reconstruction performance (and hopefully become competitive with state-of-the-art alternative denoising methods); moreover, we believe that the development of new TV algorithms that spatially adapt the regularization parameter is a possible answer (see the reconstruction results for the adaptive ℓ1-TV case) to the current reconstruction performance shortcomings relative to patch-based approaches.
Finally, it is important to highlight the flexibility of the TV method, which allows it to remain relevant to applications from a variety of fields, ranging from astronomical to SAR-based systems, and including medical applications such as ultrasound, PET, and others.
- A. C. Bovik, The Essential Guide To Image Processing, Academic Press, Boston, Mass, USA, 2009.
- L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D, vol. 60, no. 1–4, pp. 259–268, 1992.
- T. Chan, S. Esedoglu, F. Park, and A. Yip, “Recent developments in total variation image restoration,” in Handbook of Mathematical Models in Computer Vision, Springer, New York, NY, USA, 2005.
- D. Strong and T. Chan, “Edge-preserving and scale-dependent properties of total variation regularization,” Inverse Problems, vol. 19, no. 6, pp. S165–S187, 2003.
- P. Kvam and B. Vidakovic, Nonparametric Statistics with Applications to Science and Engineering, Wiley-Interscience, New York, NY, USA, 2007.
- N. P. Galatsanos, A. K. Katsaggelos, R. T. Chin, and A. D. Hillery, “Least squares restoration of multichannel images,” IEEE Transactions on Signal Processing, vol. 39, no. 10, pp. 2222–2236, 1991.
- A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York, NY, USA, 3rd edition, 1991.
- M. Lindenbaum, M. Fischer, and A. Bruckstein, “On Gabor's contribution to image enhancement,” Pattern Recognition, vol. 27, no. 1, pp. 1–8, 1994.
- M. R. Banham and A. K. Katsaggelos, “Digital image restoration,” IEEE Signal Processing Magazine, vol. 14, no. 2, pp. 24–41, 1997.
- C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of the IEEE 6th International Conference on Computer Vision, pp. 839–846, January 1998.
- S. M. Smith and J. M. Brady, “SUSAN—a new approach to low level image processing,” International Journal of Computer Vision, vol. 23, no. 1, pp. 45–78, 1997.
- D. L. Donoho, “De-noising by soft-thresholding,” IEEE Transactions on Information Theory, vol. 41, no. 3, pp. 613–627, 1995.
- A. Buades, B. Coll, and J. M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Modeling and Simulation, vol. 4, no. 2, pp. 490–530, 2005.
- K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007.
- R. J. Marks, G. L. Wise, D. G. Haldeman, and J. L. Whited, “Detection in Laplace noise,” IEEE Transactions on Aerospace and Electronic Systems, vol. 14, no. 6, pp. 866–872, 1978.
- S. Alliney, “Digital filters as absolute norm regularizers,” IEEE Transactions on Signal Processing, vol. 40, no. 6, pp. 1548–1562, 1992.
- S. Alliney, “A property of the minimum vectors of a regularizing functional defined by means of the absolute norm,” IEEE Transactions on Signal Processing, vol. 45, no. 4, pp. 913–917, 1997.
- S. Alliney and S. A. Ruzinsky, “Algorithm for the minimization of mixed l1 and l2 norms with application to Bayesian estimation,” IEEE Transactions on Signal Processing, vol. 42, no. 3, pp. 618–627, 1994.
- M. Nikolova, “Minimizers of cost-functions involving nonsmooth data-fidelity terms. Application to the processing of outliers,” SIAM Journal on Numerical Analysis, vol. 40, no. 3, pp. 965–994, 2002.
- T. F. Chan and S. Esedoglu, “Aspects of total variation regularized L1 function approximation,” SIAM Journal on Applied Mathematics, vol. 65, no. 5, pp. 1817–1837, 2005.
- J. Tukey, “Nonlinear (nonsuperimposable) methods for smoothing data,” in Proceedings of the IEEE Electronics and Aerospace Conference (EASCON '74), p. 673, Conference Records, 1974.
- H. Hwang and R. A. Haddad, “Adaptive median filters: new algorithms and results,” IEEE Transactions on Image Processing, vol. 4, no. 4, pp. 499–502, 1995.
- S.-J. Ko and Y. H. Lee, “Center weighted median filters and their applications to image enhancement,” IEEE Transactions on Circuits and Systems, vol. 38, no. 9, pp. 984–993, 1991.
- Z. Wang and D. Zhang, “Progressive switching median filter for the removal of impulse noise from highly corrupted images,” IEEE Transactions on Circuits and Systems II, vol. 46, no. 1, pp. 78–80, 1999.
- R. Garnett, T. Huegerich, C. Chui, and W. He, “A universal noise removal algorithm with an impulse detector,” IEEE Transactions on Image Processing, vol. 14, no. 11, pp. 1747–1754, 2005.
- S. Schulte, M. Nachtegael, V. de Witte, D. van der Weken, and E. E. Kerre, “A fuzzy impulse noise detection and reduction method,” IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1153–1162, 2006.
- Y. Dong and S. Xu, “A new directional weighted median filter for removal of random-valued impulse noise,” IEEE Signal Processing Letters, vol. 14, no. 3, pp. 193–196, 2007.
- W. Richardson, “Bayesian-based iterative method of image restoration,” Journal of the Optical Society of America, vol. 62, no. 1, pp. 55–59, 1972.
- L. Lucy, “An iterative technique for the rectification of observed distributions,” The Astronomical Journal, vol. 79, p. 745, 1974.
- J. Starck and F. Murtagh, Astronomical Image and Data Analysis (Astronomy and Astrophysics Library), Springer, New York, NY, USA, 2006.
- B. Zhang, J. M. Fadili, and J.-L. Starck, “Wavelets, ridgelets, and curvelets for Poisson noise removal,” IEEE Transactions on Image Processing, vol. 17, no. 7, pp. 1093–1108, 2008.
- F.-X. Dupe, J. M. Fadili, and J.-L. Starck, “A proximal iteration for deconvolving Poisson noisy images using sparse representations,” IEEE Transactions on Image Processing, vol. 18, no. 2, pp. 310–321, 2009.
- M. Mäkitalo and A. Foi, “Optimal inversion of the Anscombe transformation in low-count Poisson image denoising,” IEEE Transactions on Image Processing, vol. 20, no. 1, pp. 99–109, 2011.
- F. Anscombe, “The transformation of Poisson, binomial and negative-binomial data,” Biometrika, vol. 35, no. 3-4, pp. 246–254, 1948.
- M. Freeman and J. Tukey, “Transformations related to the angular and the square root,” The Annals of Mathematical Statistics, vol. 21, no. 4, pp. 607–611, 1950.
- J. W. Goodman, “Some fundamental properties of speckle,” Journal of the Optical Society of America, vol. 66, no. 11, pp. 1145–1150, 1976.
- C. Loizou and C. Pattichis, Despeckle Filtering Algorithms and Software For Ultrasound Imaging, Synthesis Lectures on Algorithms and Software in Engineering, Morgan & Claypool Publishers, 2008.
- G. Aubert and J.-F. Aujol, “A variational approach to removing multiplicative noise,” SIAM Journal on Applied Mathematics, vol. 68, no. 4, pp. 925–946, 2008.
- C. Sheng, Y. Xin, Y. Liping, and S. Kun, “Total variation-based speckle reduction using multi-grid algorithm for ultrasound images,” in Image Analysis and Processing ICIAP 2005, vol. 3617, pp. 245–252, Springer, Berlin, Germany, 2005.
- B. Li, Q. Liu, J. Xu, and X. Luo, “A new method for removing mixed noises,” Science China Information Sciences, vol. 54, no. 1, pp. 51–59, 2011.
- R. Rojas, Automatic regularization parameter selection for the total variation mixed noise image restoration framework [M.S. thesis], Pontifical Catholic University of Peru (PUCP), 2012.
- J. Nocedal and S. Wright, Numerical Optimization, Springer, New York, NY, USA, 2nd edition, 2006.
- D. G. Luenberger and Y. Ye, Linear and Nonlinear Programming, Springer, New York, NY, USA, 3rd edition, 2008.
- T. F. Chan and J. Shen, Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods, SIAM, 2005.
- A. Bonnet, “On the regularity of edges in image segmentation,” Annales de L’Institut Henri Poincaré C, vol. 13, no. 4, pp. 485–528, 1996.
- P. Blomgren and T. F. Chan, “Color TV: total variation methods for restoration of vector-valued images,” IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 304–309, 1998.
- P. Huber, “Robust regression: asymptotics, conjectures and monte carlo,” The Annals of Statistics, vol. 1, no. 5, pp. 799–821, 1973.
- Y. Li and F. Santosa, “A computational algorithm for minimizing total variation in image restoration,” IEEE Transactions on Image Processing, vol. 5, no. 6, pp. 987–995, 1996.
- C. Vogel, Computational Methods For Inverse Problems, SIAM, 2002.
- C. R. Vogel and M. E. Oman, “Iterative methods for total variation denoising,” SIAM Journal on Scientific Computing, vol. 17, no. 1, pp. 227–238, 1996.
- C. R. Vogel and M. E. Oman, “Fast, robust total variation-based reconstruction of noisy, blurred images,” IEEE Transactions on Image Processing, vol. 7, no. 6, pp. 813–824, 1998.
- T. F. Chan, G. H. Golub, and P. Mulet, “Nonlinear primal-dual method for total variation-based image restoration,” SIAM Journal on Scientific Computing, vol. 20, no. 6, pp. 1964–1977, 1999.
- A. Chambolle, “An algorithm for total variation minimization and applications,” Journal of Mathematical Imaging and Vision, vol. 20, no. 1-2, pp. 89–97, 2004.
- D. Krishnan, P. Lin, and A. M. Yip, “A primal-dual active-set method for non-negativity constrained total variation deblurring problems,” IEEE Transactions on Image Processing, vol. 16, no. 11, pp. 2766–2777, 2007.
- A. E. Beaton and J. W. Tukey, “The fitting of power series, meaning polynomials illustrated on band-spectroscopic data,” Technometrics, vol. 16, no. 2, pp. 147–185, 1974.
- R. Byrd and D. Payne, “Convergence of the iteratively reweighted least squares algorithm for robust regression,” Tech. Rep. 313, Johns Hopkins University, 1979.
- S. A. Ruzinsky and E. T. Olsen, “L1 and L∞ minimization via a variant of Karmarkar's algorithm,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 2, pp. 245–253, 1989.
- I. Daubechies, R. DeVore, M. Fornasier, and C. G. Güntürk, “Iteratively reweighted least squares minimization for sparse recovery,” Communications on Pure and Applied Mathematics, vol. 63, no. 1, pp. 1–38, 2010.
- P. Rodríguez and B. Wohlberg, “Efficient minimization method for a generalized total variation functional,” IEEE Transactions on Image Processing, vol. 18, no. 2, pp. 322–332, 2009.
- M. A. T. Figueiredo, J. B. Dias, J. P. Oliveira, and R. D. Nowak, “On total variation denoising: a new majorization-minimization algorithm and an experimental comparison with wavelet denoising,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 2633–2636, Atlanta, GA, USA, October 2006.
- S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2010.
- Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008.
- T. Goldstein and S. Osher, “The Split Bregman method for L1-regularized problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 323–343, 2009.
- D. Gabay and B. Mercier, “A dual algorithm for the solution of nonlinear variational problems via finite element approximation,” Computers and Mathematics with Applications, vol. 2, no. 1, pp. 17–40, 1976.
- R. Glowinski and A. Marrocco, “Sur l'approximation par éléments finis d'ordre un, et la résolution, par pénalisation-dualité d'une classe de problèmes de Dirichlet non linéaires,” Revue Française d'Automatique, Informatique, Recherche Opérationnelle, vol. 9, no. 2, pp. 41–72, 1975.
- M. A. T. Figueiredo and J. M. Bioucas-Dias, “Restoration of Poissonian images using alternating direction optimization,” IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3133–3145, 2010.
- J. M. Bioucas-Dias and M. A. T. Figueiredo, “Multiplicative noise removal using variable splitting and constrained optimization,” IEEE Transactions on Image Processing, vol. 19, no. 7, pp. 1720–1730, 2010.
- P. Rodriguez and B. Wohlberg, “A comparison of the computational performance of iteratively reweighted least squares and alternating minimization algorithms for inverse problems,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '12), pp. 3069–3072, 2012.
- F. Sha, Y. Lin, L. K. Saul, and D. D. Lee, “Multiplicative updates for nonnegative quadratic programming,” Neural Computation, vol. 19, no. 8, pp. 2004–2031, 2007.
- P. O’Grady and S. Rickard, “Recovery of non-negative signals from compressively sampled observations via non-negative quadratic programming,” in Proceedings of the Signal Processing with Adaptive Sparse Structured Representations (SPARS '09), Saint Malo, France, April 2009.
- J.-F. Aujol, G. Gilboa, T. Chan, and S. Osher, “Structure-texture image decomposition-modeling, algorithms, and parameter selection,” International Journal of Computer Vision, vol. 67, no. 1, pp. 111–136, 2006.
- J. M. Bioucas-Dias, M. A. T. Figueiredo, and J. P. Oliveira, “Total variation-based image deconvolution: a majorization-minimization approach,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '06), pp. II861–II864, Toulouse, France, May 2006.
- M. A. T. Figueiredo, J. B. Dias, J. P. Oliveira, and R. D. Nowak, “On total variation denoising: a new majorization-minimization algorithm and an experimental comparison with wavelet denoising,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 2633–2636, Atlanta, GA, USA, October 2006.
- A. Dempster, N. Laird, and D. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” Journal of the Royal Statistical Society B, vol. 39, no. 1, pp. 1–38, 1977.
- J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Transactions on Image Processing, vol. 16, no. 12, pp. 2992–3004, 2007.
- Y. Wang, W. Yin, and Y. Zhang, “A fast algorithm for image deblurring with total variation regularization,” Tech. Rep., Rice University, 2007.
- H. Fu, M. Ng, M. Nikolova, and J. Barlow, “Efficient minimization methods of mixed l2-l1 and l1-l1 norms for image restoration,” SIAM Journal on Scientific Computing, vol. 27, no. 6, pp. 1881–1902, 2006.
- T. Chan, S. Kang, and J. Shen, “Total variation denoising and enhancement of color images based on the CB and HSV color models,” The Journal of Visual Communication and Image Representation, vol. 12, pp. 422–435, 2001.
- X. Bresson and T. Chan, “Fast dual minimization of the vectorial total variation norm and applications to color image processing,” Inverse Problems and Imaging, vol. 2, no. 4, pp. 455–484, 2008.
- Y.-W. Wen, M. K. Ng, and Y.-M. Huang, “Efficient total variation minimization methods for color image restoration,” IEEE Transactions on Image Processing, vol. 17, no. 11, pp. 2081–2088, 2008.
- P. Rodríguez and B. Wohlberg, “A generalized vector-valued total variation algorithm,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '09), pp. 1309–1312, Cairo, Egypt, November 2009.
- J. Yang, W. Yin, Y. Zhang, and Y. Wang, “A fast algorithm for edge-preserving variational multichannel image restoration,” SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 569–592, 2009.
- L. Bar, A. Brook, N. Sochen, and N. Kiryati, “Deblurring of color images corrupted by impulsive noise,” IEEE Transactions on Image Processing, vol. 16, no. 4, pp. 1101–1111, 2007.
- P. Rodríguez, “Multiplicative updates algorithm to minimize the generalized total variation functional with a non-negativity constraint,” in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP '10), pp. 2509–2512, Hong Kong, China, September 2010.
- A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” IEEE Transactions on Image Processing, vol. 18, no. 11, pp. 2419–2434, 2009.
- R. Chartrand and B. Wohlberg, “Total-variation regularization with bound constraints,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '10), pp. 766–769, Dallas, TX, USA, March 2010.
- P. Rodríguez, “A non-negative quadratic programming approach to minimize the generalized vector-valued total variation functional,” in Proceedings of the European Signal Processing Conference (EUSIPCO '10), pp. 314–318, Aalborg, Denmark, 2010.
- D. Strong and T. Chan, “Spatially and scale adaptive total variation based regularization and anisotropic diffusion in image processing,” UCLA Math Department CAM Report, 1996.
- S. Ramani, T. Blu, and M. Unser, “Monte-Carlo SURE: a black-box optimization of regularization parameters for general denoising algorithms,” IEEE Transactions on Image Processing, vol. 17, no. 9, pp. 1540–1554, 2008.
- Y. Lin, B. Wohlberg, and H. Guo, “UPRE method for total variation parameter selection,” Signal Processing, vol. 90, no. 8, pp. 2546–2551, 2010.
- M. Nikolova, “A variational approach to remove outliers and impulse noise,” Journal of Mathematical Imaging and Vision, vol. 20, no. 1-2, pp. 99–120, 2004.
- A. Chambolle, “Total variation minimization and a class of binary MRF models,” in Proceedings of the 5th International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, vol. 3757, pp. 136–152, 2005.
- J. Darbon and M. Sigelle, “Image restoration with discrete constrained total variation part I: fast and exact optimization,” Journal of Mathematical Imaging and Vision, vol. 26, no. 3, pp. 261–276, 2006.
- J. Darbon and M. Sigelle, “Image restoration with discrete constrained total variation part II: levelable functions, convex priors and non-convex cases,” Journal of Mathematical Imaging and Vision, vol. 26, no. 3, pp. 277–291, 2006.
- D. Goldfarb and W. Yin, “Parametric maximum flow algorithms for fast total variation minimization,” Tech. Rep. CAAM TR07-09, Department of Computational and Applied Mathematics, Rice University, 2007.
- D. S. Hochbaum, “An efficient algorithm for image segmentation, Markov random fields and related problems,” Journal of the ACM, vol. 48, no. 4, pp. 686–701, 2001.
- D. Goldfarb and W. Yin, “Second-order cone programming methods for total variation-based image restoration,” SIAM Journal on Scientific Computing, vol. 27, no. 2, pp. 622–645, 2006.
- R. H. Chan, C.-W. Ho, and M. Nikolova, “Salt-and-pepper noise removal by median-type noise detectors and detail-preserving regularization,” IEEE Transactions on Image Processing, vol. 14, no. 10, pp. 1479–1485, 2005.
- Y. Dong, M. Hintermüller, and M. Rincon-Camacho, “Automated regularization parameter selection in Multi-Scale total variation models for image restoration,” Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 82–104, 2010.
- R. Rojas and P. Rodríguez, “Spatially adaptive total variation image denoising under salt and pepper noise,” in Proceedings of the European Signal Processing Conference (EUSIPCO '11), pp. 314–318, Barcelona, Spain, 2011.
- E. Jonsson, S. Huang, and T. Chan, “Total-variation regularization in positron emission tomography,” UCLA CAM Report 98-48, 1998.
- N. Dey, L. Blanc-Feraud, C. Zimmer et al., “Richardson-Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution,” Microscopy Research and Technique, vol. 69, no. 4, pp. 260–266, 2006.
- R. H. Chan and K. Chen, “Multilevel algorithm for a Poisson noise removal model with total-variation regularization,” International Journal of Computer Mathematics, vol. 84, no. 8, pp. 1183–1198, 2007.
- J. M. Bardsley and A. Luttman, “Total variation-penalized Poisson likelihood estimation for ill-posed problems,” Advances in Computational Mathematics, vol. 31, no. 1–3, pp. 25–59, 2009.
- A. Sawatzky, C. Brune, J. Müller, and M. Burger, “Total variation processing of images with Poisson statistics,” in Proceedings of the 13th International Conference on Computer Analysis of Images and Patterns (CAIP '09), vol. 5702, pp. 533–540, 2009.
- R. M. Willett, Z. T. Harmany, and R. F. Marcia, “Poisson image reconstruction with total variation regularization,” in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP '10), pp. 4177–4180, Hong Kong, China, September 2010.
- I. C. Rodrigues and J. M. R. Sanches, “Convex total variation denoising of poisson fluorescence confocal images with anisotropic filtering,” IEEE Transactions on Image Processing, vol. 20, no. 1, pp. 146–160, 2011.
- P. Rodríguez, “Total variation regularization for Poisson vector-valued image restoration with a spatially adaptive regularization parameter selection,” in Proceedings of the 7th International Symposium on Image and Signal Processing and Analysis (ISPA '11), pp. 402–407, Dubrovnik, Croatia, September 2011.
- R. Chartrand and V. Staneva, “A quasi-Newton method for total variation regularization of images corrupted by non-Gaussian noise,” IET Image Processing, vol. 2, pp. 295–303, 2008.
- S. Osher, N. Paragios, L. Rudin, and P. Lions, “Multiplicative denoising and deblurring: theory and algorithms,” in Geometric Level Set Methods in Imaging, Vision, and Graphics, pp. 103–119, Springer, New York, NY, USA, 2003.
- J. M. Bioucas-Dias and M. A. T. Figueiredo, “Total variation restoration of speckled images using a split-Bregman algorithm,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '09), pp. 3717–3720, Cairo, Egypt, November 2009.
- J. Darbon, M. Sigelle, and F. Tupin, “A note on nice-levelable MRFs for SAR image denoising with contrast preservation,” Rapport Interne ENST 2006D006, Ecole Nat. Supérieure des Télécommunications, Paris, France, 2006.
- S. Durand, J. Fadili, and M. Nikolova, “Multiplicative noise removal using l1 fidelity on frame coefficients,” Journal of Mathematical Imaging and Vision, vol. 36, no. 3, pp. 201–226, 2010.
- J. Shi and S. Osher, “A nonlinear inverse scale space method for a convex multiplicative noise model,” SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 294–321, 2008.
- L. Xiao, L.-L. Huang, and Z.-H. Wei, “Multiplicative noise removal via a novel variational model,” EURASIP Journal on Image and Video Processing, vol. 2010, Article ID 250768, 2010.
- J. Shen, “On the foundations of vision modeling: I. Weber's law and Weberized TV restoration,” Physica D, vol. 175, no. 3-4, pp. 241–251, 2003.
- P. Rodríguez, “Nonconvex total variation speckled image restoration via nonnegative quadratic programming algorithm,” in Proceedings of the European Signal Processing Conference (EUSIPCO '11), pp. 288–292, Barcelona, Spain, August 2011.
- P. Rodríguez, R. Rojas, and B. Wohlberg, “Mixed Gaussian-impulse noise image restoration via total variation,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '12), pp. 1077–1080, Kyoto, Japan, March 2012.
- J. Cai, R. Chan, and M. Nikolova, “Two-phase approach for deblurring images corrupted by impulse plus Gaussian noise,” Inverse Problems and Imaging, vol. 2, no. 2, pp. 187–204, 2008.
- J. Cai, R. Chan, and M. Nikolova, “Fast two-phase image deblurring under impulse noise,” Journal of Mathematical Imaging and Vision, vol. 36, no. 1, pp. 46–53, 2010.
- Y. Xiao, T. Zeng, J. Yu, and M. K. Ng, “Restoration of images corrupted by mixed Gaussian-impulse noise via l1-l0 minimization,” Pattern Recognition, vol. 44, no. 8, pp. 1708–1720, 2011.
- Signal and Image Processing Institute, University of Southern California, “The USC-SIPI image database,” http://sipi.usc.edu/database/.
- Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
- P. Rodríguez, “IRLS based TV algorithms for different noise models, Matlab central file,” http://www.mathworks.com/matlabcentral/.