Mathematical Problems in Engineering / 2014 / Research Article | Open Access
Special Issue: Recent Theory and Applications on Inverse Problems 2014
Volume 2014 | Article ID 206926 | 12 pages | https://doi.org/10.1155/2014/206926

Bound Alternative Direction Optimization for Image Deblurring

Academic Editor: Fatih Yaman
Received: 08 Mar 2014
Revised: 19 Jul 2014
Accepted: 20 Jul 2014
Published: 13 Aug 2014

Abstract

This paper proposes a new method, the bound alternative direction method (BADM), to address ℓp (0 < p < 1) minimization problems in image deblurring. The approach first obtains a bound unconstrained problem by bounding the regularizer with a novel majorizer; then, via variable splitting, it reformulates the bound unconstrained problem as a constrained one, which is addressed by an augmented Lagrangian method. The proposed algorithm in effect combines the reweighted ℓ1 minimization method with the alternating direction method of multipliers (ADMM), thereby extending the application of ADMM to ℓp minimization problems. The conducted experimental studies demonstrate the superiority of the proposed algorithm for the ℓp synthesis formulation over state-of-the-art algorithms for the ℓ1 synthesis formulation on image deblurring.

1. Introduction

The mission of image deblurring is to restore an original image x from a noisy blurred observation modeled as

y = H x + w,  (1)

where x ∈ R^n (stacking an m × m image into an n-vector, n = m^2, in lexicographic order), H is the matrix representation of a convolution operator, and w is Gaussian white noise. This imaging inverse problem has recently inspired a considerable amount of research ([1-11] and further references therein), in which the optimization problems fall into two varieties: the synthesis formulation and the analysis formulation [12], detailed below.
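To make the observation model concrete, the following sketch simulates y = Hx + w with a periodic (circular) convolution implemented in the Fourier domain. The function name blur_observe, the toy image, and the 9 × 9 uniform kernel are illustrative choices, not taken from the paper.

```python
import numpy as np

def blur_observe(x, kernel, sigma, rng):
    """Simulate y = H x + w for a periodic (circular) convolution H.

    x      : 2-D image array
    kernel : 2-D blur kernel (sums to 1 for an averaging blur)
    sigma  : standard deviation of the Gaussian white noise w
    """
    # Embed the kernel in an image-sized array so that products by H
    # reduce to pointwise multiplication in the Fourier domain.
    psf = np.zeros_like(x)
    kh, kw = kernel.shape
    psf[:kh, :kw] = kernel
    psf = np.roll(psf, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # center kernel at origin
    Hx = np.real(np.fft.ifft2(np.fft.fft2(psf) * np.fft.fft2(x)))
    return Hx + sigma * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
x = np.zeros((32, 32)); x[12:20, 12:20] = 1.0   # toy piecewise-constant image
kernel = np.full((9, 9), 1.0 / 81.0)            # 9x9 uniform blur
y = blur_observe(x, kernel, sigma=0.01, rng=rng)
```

Because the kernel sums to one, the blur preserves the image mean; only the edges of the square are smeared and noise is superimposed.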

1.1. Formulations and Algorithms

In the synthesis formulation, the unknown image is represented as x = Wθ, where W is a wavelet frame or a redundant dictionary and θ are the sparse coefficients, usually estimated via a sparsity-promoting regularizer, yielding

min_θ (1/2) ||y - H W θ||^2 + λ φ(θ),  (2)

where λ > 0 is the regularization parameter and φ is the regularizer, typically the ℓ1 norm. The deblurred image is then obtained as x̂ = W θ̂.

In the analysis formulation, as opposed to (2), the cost function is minimized with respect to the image x itself:

min_x (1/2) ||y - H x||^2 + λ φ(P x),  (3)

where P is a sparsifying transform (such as a wavelet transform or finite differences) that analyzes the image itself, in contrast to the coefficients θ that (2) works on. If P is a wavelet frame, then φ is usually the ℓ1 norm; if P is a matrix representing finite differences in the horizontal and vertical directions, then φ(Px) is the discrete anisotropic or isotropic total variation [8, 9].
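As a concrete instance of the analysis regularizer, the sketch below evaluates the analysis objective with P taken to be horizontal and vertical finite differences (so φ(Px) is the discrete anisotropic total variation) and, purely for brevity, H = I (pure denoising). The function name is illustrative.

```python
import numpy as np

def analysis_objective(x, y, lam):
    """Analysis-formulation objective (3) with P = horizontal/vertical finite
    differences, so phi(Px) is the discrete anisotropic total variation.
    H is taken as the identity (pure denoising) to keep the sketch short."""
    dh = np.diff(x, axis=1)   # horizontal differences
    dv = np.diff(x, axis=0)   # vertical differences
    tv = np.abs(dh).sum() + np.abs(dv).sum()
    return 0.5 * np.sum((y - x) ** 2) + lam * tv
```

A constant image has zero total variation, so its objective reduces to the data-fitting term alone; an image with a single unit-height vertical edge across r rows contributes exactly r to the anisotropic TV.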

In the past several decades, many algorithms have been proposed to solve (2) and (3). Among these, the iterative shrinkage/thresholding (IST) algorithm [2] can be considered the standard one. However, IST tends to be slow, in particular when H in (1) is poorly conditioned. To overcome this problem, several fast variants of IST have been proposed: TwIST [13], FISTA [14], and SpaRSA [15]. TwIST is a two-step version of IST in which each iterate depends on the two previous iterates, rather than only on the previous one (as in IST); FISTA is a nonsmooth variant of Nesterov's optimal gradient-based algorithm for smooth convex problems [16, 17]; and SpaRSA adopts more aggressive choices of step size in each iteration. These three algorithms, which use only the gradient information of the data-fitting term, have been shown to clearly outperform IST. Other efficient algorithms that use second-order information of the data-fitting term have also been proposed. A notable representative is the so-called split augmented Lagrangian shrinkage algorithm (SALSA) [10, 11], which was found to be faster than FISTA, TwIST, and SpaRSA when the matrix inversion (H^T H + μI)^{-1} (where μ is a positive parameter) can be computed efficiently (e.g., if H is a circulant matrix, the inversion can be carried out efficiently via the FFT). Actually, SALSA is an instance of the alternating direction method of multipliers (ADMM) [18, 19], which has a close relationship [20] with the Bregman iterations [21-24]; among these, the split Bregman method (SBM) [22] has recently been applied frequently to imaging inverse problems [25-27].

1.2. ℓp Minimization

Convergence of the above algorithms is guaranteed, benefiting from the fact that the regularizer in (2) and (3) is usually convex. In this paper, a nonconvex (and nonsmooth) regularizer, namely the ℓp (0 < p < 1) norm, is adopted, since the ℓp norm is able to find sparser solutions than the ℓ1 norm, as demonstrated in many studies [28-32]. This leads to the ℓp minimization problem of the synthesis formulation,

min_θ (1/2) ||y - H W θ||^2 + λ ||θ||_p^p,  (4)

and of the analysis formulation,

min_x (1/2) ||y - H x||^2 + λ ||P x||_p^p,  (5)

where ||v||_p^p = Σ_i |v_i|^p.

Next, a brief survey of the ℓp minimization literature is given. Recently, remarkable attention has been paid to the ℓp minimization problem

min_x (1/2) ||y - A x||^2 + λ ||x||_p^p,  (6)

where A is a generic observation matrix and 0 < p < 1. Some sufficient conditions for (6) have been established in [29, 33-35]. Many efficient reweighted minimization methods address (6) by iteratively solving

x^{k+1} = arg min_x  f(x) + R_k(x),  (7)

where x^k is the estimated signal at the kth iteration; f(x) is the data-fitting term that preserves consistency with the observation (for instance, f(x) = (1/2) ||y - A x||^2 for Gaussian noise); and R_k(x) is the regularization term at the kth iteration, which aims to encourage desirable properties of the target, such as sparsity. Most reweighted minimization methods (see [2, 28, 30, 31, 34, 36, 37] and many references therein) focus on reweighted ℓ1 (IRL1) and reweighted ℓ2 (IRL2) minimization, whose regularizers are R_k(x) = λ Σ_i w_i^k |x_i| and R_k(x) = λ Σ_i w_i^k x_i^2, respectively, where x_i is the ith component of x and w_i^k, the weight of x_i at the kth iteration, is a function of the previously estimated component x_i^k. In IRL1, the weight is usually a decreasing function such as w_i^k = (|x_i^k| + ε)^{p-1}, while in IRL2 it typically takes the form w_i^k = (|x_i^k|^2 + ε)^{p/2-1}, where ε > 0. Notice that the exponent parameter is set to a fixed value in most methods (e.g., IRL1 [28] and IRL2 [2]; IRL1 [34, 36, 38] and IRL2 [30, 31, 38]; see also [32]). It is worth noting that the above weights are separable, even though there also exist some reweighted minimization methods [39, 40] with inseparable weights.
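The reweighting idea can be illustrated in the simplest setting A = I, where each weighted ℓ1 subproblem has a closed-form solution (componentwise soft thresholding). The weight choice w_i = 1/(|x_i| + ε) below follows [28]; the loop length, λ, and ε are arbitrary demo values.

```python
import numpy as np

def irl1_weights(x_prev, eps=1e-2):
    """One common IRL1 weight choice (as in [28]): small components of the
    previous estimate get large weights, pushing them harder toward zero
    at the next iteration."""
    return 1.0 / (np.abs(x_prev) + eps)

def weighted_soft(v, thresh):
    """Componentwise soft thresholding with per-component thresholds:
    the prox of sum_i thresh_i * |x_i|."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

# Reweighted l1 denoising (A = I): x^{k+1} = soft(y, lam * w^k) componentwise.
y = np.array([2.0, 0.3, -1.5, 0.05])
x = y.copy()
lam = 0.1
for _ in range(5):
    x = weighted_soft(y, lam * irl1_weights(x))
```

After a few sweeps, the small components are driven exactly to zero (their weights blow up), while the large components incur only a small shrinkage, which is precisely the behavior the ℓp regularizer is meant to mimic.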

1.3. Proposed Approach

In this paper, the ℓp minimization problem is used for image deblurring, but the resulting problem cannot be efficiently solved by the ℓp minimization methods surveyed in Section 1.2, since they usually adopt slow iterations (such as the generic IST algorithm), and in image deblurring the matrix-vector products are computationally expensive. Considering this, the ADMM is used in this paper to tackle the resulting ℓp minimization problem because, as stated in Section 1.1, image deblurring problems can be handled efficiently by the ADMM. The proposed approach is to first bound the problem via a variant of the majorizer proposed in [4, 7], obtaining a bound unconstrained problem; then to reformulate it into a constrained problem via the technique of variable splitting [41, 42]; and lastly to attack the resulting constrained problem using an augmented Lagrangian method (ALM) [43]. If the majorizer proposed in [4, 7] were used directly to bound the original ℓp unconstrained problem, then any component of the estimate that became zero during the iterations would be stuck at zero forever, which very likely prevents convergence to a minimizer. To overcome this problem, a variant of this majorizer is proposed here by adding a small positive parameter that shrinks in each iteration. The resulting bound unconstrained problem is reformulated into a constrained one through variable splitting, which splits the original variable into a pair of variables serving as the arguments of the data-fitting term and the regularizer, respectively, under the constraint that the two variables be equal. This constrained problem is then attacked by an ALM, yielding a Lagrangian function of the two variables produced by variable splitting.
Next, the Lagrangian function is minimized alternately with respect to the two variables, leading to the method called the bound alternative direction method (BADM), which is equivalent to the combination of the reweighted ℓ1 minimization method and the ADMM and thereby extends the application of ADMM to ℓp minimization problems.

The proposed BADM has only O(n log n) cost per iteration when solving the synthesis formulation with a normalized Parseval frame. Experiments on a set of benchmark problems show that the BADM for (4) is favorably competitive with the state-of-the-art algorithms FISTA [14], SALSA [11], and SBM [22] for (2).

1.4. Terminology and Notation

In this section, some useful elements of convex analysis are given. Let H be a real Hilbert space equipped with the inner product ⟨·,·⟩ and associated norm ||·||. Let Γ0 denote the class of all lower semicontinuous convex functions f: H → (-∞, +∞] that are not equal to +∞ everywhere and are never equal to -∞. The proximity operator of f ∈ Γ0 is defined as

prox_{τf}(y) = arg min_x  f(x) + (1/(2τ)) ||x - y||^2,  (8)

where τ is positive. If f ∈ Γ0, then prox_{τf}(y) is unique [44]. If f is the indicator function of a nonempty closed convex set C, then prox_{τf} becomes the projection of y onto C, and in this sense (8) is a generalization of the projection operator. If f is the ℓ1 norm, then (8) becomes the well-known soft thresholding

prox_{τ||·||_1}(y) = soft(y, τ) = sign(y) ⊙ max{|y| - τ, 0},  (9)

where ⊙ is the componentwise multiplication between two vectors of the same dimension and sign is the sign function.
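A direct implementation of the soft thresholding in (9) is a few lines:

```python
import numpy as np

def soft(y, tau):
    """Soft thresholding, the proximity operator of tau * ||.||_1 (see (9)):
    soft(y, tau) = sign(y) * max(|y| - tau, 0), applied componentwise.
    tau may be a scalar or a vector of per-component thresholds."""
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)
```

For example, soft(3, 1) = 2 and soft(-0.5, 1) = 0: components larger than the threshold are shrunk toward zero by τ, and components smaller than it are set exactly to zero.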

2. Bound Alternative Direction Optimization

Consider a regularized unconstrained model

min_x  F(x) = f(x) + λ φ(x),  (10)

where φ(x) = ||x||_p^p with 0 < p < 1 and f is a smooth function with L-Lipschitz-continuous gradient that is bounded below. It is unsuitable to directly apply the existing proximal splitting algorithms (such as IST, FISTA, TwIST, and SpaRSA discussed in the Introduction) to solve (10), since, for 0 < p < 1, the nonconvex nature of φ blocks the use of the proximity operator (see (8)), which is well defined only for functions in Γ0. To overcome this problem, a bound optimization approach is considered in this paper.

2.1. Bound Optimization
2.1.1. Bound φ(x) via the Majorizer Proposed in [7]

Since t^p is concave on [0, ∞) for 0 < p < 1, it is reasonable to bound φ(x) = Σ_i |x_i|^p through its tangent at a previous estimate v:

φ(x) ≤ Q(x; v) = Σ_i ( w_i |x_i| + c_i ),  (11)

with

w_i = p |v_i|^{p-1},   c_i = (1 - p) |v_i|^p,  (12)

where v_i is the ith component of v. It is easy to see that φ(x) ≤ Q(x; v) with equality for |x_i| = |v_i|, since t^p ≤ s^p + p s^{p-1}(t - s) with equality for t = s. One benefit of the above bound is that the following lemma holds.

Lemma 1. Given v, Q(·; v) belongs to Γ0 with respect to x.

Proof. Given v, consider each term of Q(·; v): if v_i ≠ 0, then w_i |x_i| + c_i is a finite convex function of x_i, since w_i > 0; if v_i = 0, then the weight w_i is infinite, so the term equals c_i if x_i = 0 and +∞ otherwise, which is still lower semicontinuous and convex. Therefore, Q(·; v) ∈ Γ0.

Therefore, the proximity operator of Q(·; v) for a given v is obtained componentwise by

[prox_{τ Q(·;v)}(y)]_i = soft(y_i, τ w_i),  (13)

with w_i = p |v_i|^{p-1}.

Notice that soft(y_i, ∞) = 0 for any y_i.

From the above, a closed-form solution of the proximity operator of Q(·; v) is available after the bound operation. However, many of the iterative minimization algorithms discussed in the Introduction repeatedly use the proximity operator of the regularization term. For φ, a natural way is to set v to the previous estimate x^k and obtain the current estimate by computing the proximity operator of Q(·; x^k); that is,

x^{k+1} = prox_{τ Q(·; x^k)}(z^k),  (14)

where z^k is a temporary variable that differs from algorithm to algorithm. According to (13) with (14), if a component of x^k becomes zero, then its weight becomes infinite, so the corresponding component of x^{k+1} is set to zero and stays stuck at zero forever, which may prevent convergence to a local minimizer, let alone a global one. To overcome this shortcoming, a new bound method is proposed below.

2.1.2. Proposed Bound Method

A method is presented that bounds φ(x) through

φ(x) ≤ Q_ε(x; v) = Σ_i ( w_i^ε |x_i| + c_i^ε ),  (15)

with

w_i^ε = p (|v_i| + ε)^{p-1}  (16)

and constants c_i^ε independent of x, where ε is a small positive parameter. It is clear that φ(x) ≤ Q_ε(x; v), and the bound becomes tight at x = v (where v may even be the vector of all zeros) as ε → 0. Q_ε(·; v) has the following property.

Lemma 2. Q_ε(·; v) belongs to Γ0 with respect to x.

Proof. For any v, the weight w_i^ε = p (|v_i| + ε)^{p-1} is finite and positive; thus each term w_i^ε |x_i| + c_i^ε is a convex, continuous function of x_i. Therefore, Q_ε(·; v) ∈ Γ0.

Therefore, a closed-form solution of the proximity operator of Q_ε(·; v) can be obtained:

[prox_{τ Q_ε(·;v)}(y)]_i = soft(y_i, τ w_i^ε),  (18)

with w_i^ε = p (|v_i| + ε)^{p-1}, which is finite for every v, so no component can be forced to stay at zero.
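The tangent-majorization argument can be checked numerically. Assuming the weight w = p(|v| + ε)^{p-1} obtained by linearizing t ↦ (t + ε)^p at t = |v| (the tangent constant below is chosen accordingly and is illustrative), the bound dominates |x|^p everywhere and the weight stays finite even at v = 0:

```python
import numpy as np

# Since t -> (t + eps)^p is concave on [0, inf) for 0 < p < 1, its tangent
# at t = |v| lies above it, hence
#   |x|^p <= (|x| + eps)^p <= (|v| + eps)^p + p (|v| + eps)^(p-1) (|x| - |v|).
def eps_weight(v, p, eps):
    """Weight p (|v| + eps)^(p-1); finite even when v = 0."""
    return p * (np.abs(v) + eps) ** (p - 1.0)

def majorizer(x, v, p, eps):
    """Tangent bound w|x| + c of (t + eps)^p at t = |v|, evaluated at t = |x|."""
    w = eps_weight(v, p, eps)
    c = (np.abs(v) + eps) ** p - w * np.abs(v)   # makes the bound tangent at |x| = |v|
    return w * np.abs(x) + c

p, eps = 0.5, 1e-3
xs = np.linspace(-3, 3, 601)
ok = all(np.all(majorizer(xs, vi, p, eps) >= np.abs(xs) ** p - 1e-12)
         for vi in (0.0, 0.2, -1.0))
```

The check confirms that the bound holds for every tested anchor v, including v = 0, where the unmodified majorizer of Section 2.1.1 would have an infinite weight.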

2.1.3. Iterative Bound Optimization (IBO)

We are now ready to propose the framework of the IBO, given as Algorithm 1.

(1) Set k = 0, and choose an arbitrary x^0, ε_0 > 0, and a shrink factor ρ ∈ (0, 1).
(2) repeat
(3) x^{k+1} = arg min_x f(x) + λ Σ_i w_i^k |x_i|, where w_i^k = p (|x_i^k| + ε_k)^{p-1}
(4) ε_{k+1} = ρ ε_k
(5) k = k + 1
(6) until some stopping criterion is satisfied.

A key observation is that the sequence {ε_k} generated by the IBO approaches zero as k → ∞, so that (10) can be solved by the IBO, which iteratively solves a sequence of problems

min_x  f(x) + λ Σ_i ( |x_i| + ε_k )^p,  (19)

where x_i is the ith component of x. Any accumulation point of the sequence generated by the IBO is a first-order stationary point of (19), which is guaranteed by the following theorem.

Theorem 3. Let the sequence {x^k} be generated by the above IBO and suppose that x* is an accumulation point of {x^k}; then x* is a first-order stationary point of (19).

Proof. The IBO is actually a specific case of the iterative reweighted ℓ1 method proposed in [45], so this theorem is also a specific case of Theorem 3.1 in [45].
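The IBO framework can be sketched for the special case f(x) = (1/2)||y - x||^2 (pure denoising), where each weighted ℓ1 subproblem has the closed-form solution soft(y, λw). The shrink factor ρ and all numeric constants are assumptions for the demo; the framework only requires that ε_k tends to zero.

```python
import numpy as np

def soft(y, tau):
    """Componentwise soft thresholding (prox of a weighted l1 term)."""
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

def ibo_denoise(y, lam=0.5, p=0.5, eps0=1.0, rho=0.5, iters=30):
    """Sketch of the IBO loop for f(x) = 0.5*||y - x||^2, so that the
    weighted-l1 subproblem in Step 3 is solved exactly by soft(y, lam*w)."""
    x, eps = y.copy(), eps0
    for _ in range(iters):
        w = p * (np.abs(x) + eps) ** (p - 1.0)   # weights from the proposed majorizer
        x = soft(y, lam * w)                      # weighted-l1 subproblem, closed form
        eps *= rho                                # drive eps_k -> 0
    return x

y = np.array([3.0, 0.1, -2.0, 0.02])
x_hat = ibo_denoise(y)
```

Small observations are annihilated while large ones survive with only mild shrinkage, mirroring the sparsity-promoting behavior of the ℓp penalty; thanks to ε > 0, a component that touches zero early can in principle re-enter at a later iteration.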

2.2. Bound Alternative Direction Method

Consider the unconstrained optimization problem that corresponds to Step 3 of the IBO:

min_x  f(x) + λ Σ_i w_i^k |x_i|.  (20)

Using the technique of variable splitting, (20) becomes a constrained optimization problem:

min_{x, u}  f(x) + λ Σ_i w_i^k |u_i|   subject to   x = u.  (21)

The rationale behind variable splitting is that it may be easier to solve (21) than to solve (20). The augmented Lagrangian function of (21) is

L(x, u, d) = f(x) + λ Σ_i w_i^k |u_i| + d^T (x - u) + (μ/2) ||x - u||^2,  (22)

where d is the vector of Lagrange multipliers and μ > 0 is the penalty parameter. According to the augmented Lagrangian method (ALM) [43], (21) can be solved by repeating the following iterative process until some stopping criterion is satisfied: minimize (22) with respect to x and u with d fixed at its previous estimate, and then update d. This yields a variant of ALM called the bound ALM (BALM) (Algorithm 2).

(1) Set k = 0, and choose μ > 0, λ > 0, ε_0 > 0, and an arbitrary (u^0, d^0).
(2) repeat
(3) (x^{k+1}, u^{k+1}) = arg min_{x, u} L(x, u, d^k)
(4) d^{k+1} = d^k + μ (x^{k+1} - u^{k+1})
(5) w_i^{k+1} = p (|x_i^{k+1}| + ε_{k+1})^{p-1}
(6) k = k + 1
(7) until some stopping criterion is satisfied

Notice that the terms added to f(x) in the definition of L (see (22)) can be reformulated as a single quadratic term, (μ/2) ||x - u - b||^2 with b = -d/μ (up to a constant), leading to a variant of BALM (Algorithm 3).

(1) Set k = 0, and choose μ > 0, λ > 0, ε_0 > 0, and an arbitrary (u^0, b^0).
(2) repeat
(3) (x^{k+1}, u^{k+1}) = arg min_{x, u} f(x) + λ Σ_i w_i^k |u_i| + (μ/2) ||x - u - b^k||^2
(4) b^{k+1} = b^k - (x^{k+1} - u^{k+1})
(5) w_i^{k+1} = p (|x_i^{k+1}| + ε_{k+1})^{p-1}
(6) k = k + 1
(7) until some stopping criterion is satisfied.

In Algorithm 3, b^k corresponds to -d^k/μ, where d^k and μ are the two parameters used in the BALM. It is usually difficult to obtain x^{k+1} and u^{k+1} simultaneously in Step 3. To overcome this difficulty, the technique of nonlinear block Gauss-Seidel (NLBGS) [46] is naturally used, in which the objective function of Step 3 is minimized alternately with respect to x and u while keeping the other variable fixed, yielding the following bound alternative direction method (BADM) (Algorithm 4).

(1) Set k = 0, and choose μ > 0, λ > 0, ε_0 > 0, and an arbitrary (u^0, b^0).
(2) repeat
(3) x^{k+1} = arg min_x f(x) + (μ/2) ||x - u^k - b^k||^2
(4) u^{k+1} = arg min_u λ Σ_i w_i^k |u_i| + (μ/2) ||x^{k+1} - u - b^k||^2
(5) b^{k+1} = b^k - (x^{k+1} - u^{k+1})
(6) w_i^{k+1} = p (|u_i^{k+1}| + ε_{k+1})^{p-1}
(7) k = k + 1
(8) until some stopping criterion is satisfied.

In Algorithm 4, Step 3 is equivalent to x^{k+1} = prox_{f/μ}(u^k + b^k), while Step 4 (see (13) and (9)) is equivalent to the weighted soft thresholding u^{k+1} = soft(x^{k+1} - b^k, (λ/μ) w^k). Moreover, since the objective function of (10) is nonconvex for 0 < p < 1, BADM cannot be guaranteed to converge to a global optimum. Nevertheless, as stated in Theorem 3, the proposed algorithm is able to obtain a first-order stationary point, which in practice is usually a good-quality deblurred image.

3. Image Deblurring Using Synthesis Formulation and BADM

In this section, the synthesis formulation (see (4)) and the BADM are applied to image deblurring. Note that the extension to the analysis formulation is natural and is left as future work. Here, therefore, f(θ) = (1/2) ||y - H W θ||^2, and the weights w^k are given by the proposed majorizer of ||θ||_p^p at the kth iteration; inserting these into the BADM yields Algorithm 5, where Step 4 is derived from (18). Assume that H represents a (periodic) convolution and that W is a normalized Parseval frame (i.e., W W^T = I, where I is the identity matrix, although possibly W^T W ≠ I). According to the Sherman-Morrison-Woodbury matrix inversion formula and W W^T = I,

(W^T H^T H W + μI)^{-1} = (1/μ) [ I - W^T H^T (μI + H H^T)^{-1} H W ].

Moreover, under the periodic boundary condition for H, H^T H is block circulant, so it can be diagonalized by the two-dimensional discrete Fourier transform (DFT): writing H = U^H D U, where U is the DFT matrix and D is diagonal, applying H is equivalent to a filter in the Fourier domain, and the cost of products by H using FFT algorithms is O(n log n) [10]. Thus

(μI + H H^T)^{-1} = U^H (μI + |D|^2)^{-1} U,

where |D|^2 = D^H D. As in the computational complexity analysis in [10], the cost of computing Step 3 is O(n log n). Moreover, Step 4 is a soft thresholding whose cost is O(n), and Step 5 also has O(n) cost. Therefore, each iteration of the BADM for (4) has O(n log n) cost.

(1) Set k = 0, and choose μ > 0, λ > 0, ε_0 > 0, and an arbitrary (u^0, b^0).
(2) repeat
(3) θ^{k+1} = (W^T H^T H W + μI)^{-1} (W^T H^T y + μ (u^k + b^k))
(4) u^{k+1} = soft(θ^{k+1} - b^k, (λ/μ) w^k)
(5) b^{k+1} = b^k - (θ^{k+1} - u^{k+1})
(6) w_i^{k+1} = p (|u_i^{k+1}| + ε_{k+1})^{p-1}
(7) k = k + 1
(8) until some stopping criterion is satisfied.
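The FFT diagonalization behind the O(n log n) cost can be illustrated in 1-D with W = I, so the linear system reduces to (H^T H + μI)x = v for a circulant H. The helper names are illustrative, and the dense solve is included only to verify the Fourier-domain solution.

```python
import numpy as np

def circulant_from_kernel(h):
    """Dense 1-D circulant matrix whose first column is h (verification only)."""
    n = len(h)
    return np.array([np.roll(h, i) for i in range(n)]).T

def fft_solve(h, mu, v):
    """Solve (H^T H + mu I) x = v in O(n log n) when H is circulant with
    first column h: H is diagonalized by the DFT, so H^T H + mu I has
    eigenvalues |H_hat|^2 + mu on the Fourier basis."""
    H_hat = np.fft.fft(h)
    return np.real(np.fft.ifft(np.fft.fft(v) / (np.abs(H_hat) ** 2 + mu)))

n, mu = 16, 0.5
h = np.zeros(n); h[:3] = 1.0 / 3.0          # circular moving-average blur
rng = np.random.default_rng(1)
v = rng.standard_normal(n)
x_fft = fft_solve(h, mu, v)                  # O(n log n) Fourier-domain solve
H = circulant_from_kernel(h)
x_dense = np.linalg.solve(H.T @ H + mu * np.eye(n), v)  # O(n^3) reference
```

The two solutions agree to machine precision, confirming that the inversion in Step 3 never needs to be formed explicitly.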

4. Experiments

In this section, the proposed BADM for (4) is compared with the state-of-the-art algorithms FISTA [14], SALSA [11], and SBM [22] for (2) (from now on, BADM always refers to (4), while FISTA, SALSA, and SBM refer to (2) unless otherwise specified). Consider a low-frequency image (Cameraman), a high-frequency image (Mandril), and an image containing both low and high frequencies (Lena) (see Figure 1), each of size 256 × 256 pixels, corrupted according to the following three benchmark cases [5, 11]: (I) uniform blur of size 9 × 9 with Gaussian noise (termed UNI); (II) a Gaussian blur kernel (termed GAU); and (III) a point spread function proportional to 1/(1 + i^2 + j^2) for i, j = -7, ..., 7 (termed PSF). All experiments were performed in MATLAB on a 64-bit Windows 7 PC with an Intel Core i7 3.07 GHz processor and 6.0 GB of RAM. To measure the performance of the different algorithms, the following four metrics are employed (where x is the original image and y and x̂^k are the observed image and the estimate at the kth iteration, resp.): (a) consumed CPU time (Time); (b) number of iterations (Iterations); (c) mean square error (MSE = ||x̂^k - x||^2 / n); and (d) improvement in SNR (ISNR = 10 log10(||y - x||^2 / ||x̂^k - x||^2)). The stopping criterion is a small relative change of the objective function between consecutive iterations, and the tolerance, as well as the other necessary parameters (such as μ, p, and ε_0 for the proposed BADM), is hand tuned for each algorithm in each experiment for the best ISNR.
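The two quality metrics follow the standard definitions and can be computed directly; function names are illustrative.

```python
import numpy as np

def mse(x_hat, x):
    """Mean squared error between the estimate and the original image."""
    return np.mean((x_hat - x) ** 2)

def isnr(y, x_hat, x):
    """Improvement in SNR (dB): how much closer x_hat is to x than the
    observation y was; positive values mean the restoration helped."""
    return 10.0 * np.log10(np.sum((y - x) ** 2) / np.sum((x_hat - x) ** 2))
```

By construction, ISNR is zero when the "estimate" is just the observation itself, and halving the residual energy gains about 3 dB.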

The results for the four metrics for the Cameraman, Mandril, and Lena images are listed in Tables 1, 2, and 3, respectively, and the images deblurred by BADM are shown in Figures 2, 3, and 4, respectively. The evolutions of the objective function and the MSE for the different algorithms in experiments (I), (II), and (III) (to avoid redundancy, only for the Cameraman image) are shown in Figures 5, 6, and 7, respectively. Moreover, the results of time, MSE, and ISNR as p varies are shown in Figure 8. From these results, it is clear that BADM outperforms FISTA, SALSA, and SBM in terms of MSE and ISNR.


Table 1: Results for the Cameraman image.

Algorithm   Time (seconds)        Iterations       MSE                       ISNR
            UNI    GAU    PSF     UNI  GAU  PSF    UNI     GAU     PSF       UNI   GAU   PSF
FISTA       16.52  11.01  5.02    150  98   47     102.71  217.85  112.32    7.19  2.67  4.39
SALSA        0.80   2.17  1.72      4   8    6      90.40  214.41  109.25    7.80  2.75  4.52
SBM          0.95   2.53  1.50      4   8    6      90.40  214.36  109.25    7.80  2.75  4.52
BADM         1.44   3.28  1.55      4   9    6      85.42  212.29  105.18    8.05  2.78  4.66


Table 2: Results for the Mandril image.

Algorithm   Time (seconds)        Iterations       MSE                       ISNR
            UNI    GAU    PSF     UNI  GAU  PSF    UNI     GAU     PSF       UNI   GAU   PSF
FISTA       56.82  49.11  17.22   110  94   33     119.23  288.45  142.12    5.50  1.42  2.19
SALSA        3.41   5.41   3.63     4   8    5     119.12  255.30   82.98    5.59  1.95  4.51
SBM          3.06   4.51   2.81     4   8    5     119.63  255.30   82.98    5.59  1.95  4.51
BADM         4.26   9.34   3.60     4   8    5     117.24  246.12   81.15    5.69  2.11  4.60


Table 3: Results for the Lena image.

Algorithm   Time (seconds)        Iterations       MSE                     ISNR
            UNI    GAU    PSF     UNI  GAU  PSF    UNI    GAU    PSF       UNI   GAU   PSF
FISTA       53.13  40.8   21.41    98  81   42     43.89  82.84  53.90     6.01  2.82  2.17
SALSA        2.91   4.95   4.09     4   6    6     41.15  72.42  34.32     6.15  3.34  4.14
SBM          2.52   4.07   3.42     4   6    6     41.15  72.42  34.32     6.15  3.34  4.14
BADM         4.22   5.21   6.66     4   5    6     38.06  69.10  34.10     6.47  3.59  4.18

5. Conclusions

This paper has proposed a new bound alternative direction method (BADM) for ℓp minimization problems in image deblurring. To solve the unconstrained optimization problem, the idea of BADM is to first bound the regularizer to obtain a bound unconstrained problem, which is then reformulated into a constrained one by variable splitting. The resulting constrained problem is further addressed by an augmented Lagrangian method, more specifically, the alternating direction method of multipliers (ADMM). Therefore, the BADM is an extension of the ADMM to ℓp minimization problems. Experiments on a set of image deblurring problems have shown that the BADM for the ℓp synthesis formulation is favorably competitive with state-of-the-art algorithms for the ℓ1 synthesis formulation.

In future work, the BADM will be applied to the analysis formulation and to other applications such as inpainting and magnetic resonance imaging.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by the China Scholarship Council (CSC: 2010611017). Xiangrong Zeng would like to thank the anonymous reviewers who have helped to improve the quality of this paper.

References

  1. C. Chaux, P. L. Combettes, J.-C. Pesquet, and V. R. Wajs, “A variational formulation for frame-based inverse problems,” Inverse Problems, vol. 23, article 1495, no. 4, 2007. View at: Publisher Site | Google Scholar | MathSciNet
  2. I. Daubechies, M. Defrise, and C. de Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413–1457, 2004. View at: Publisher Site | Google Scholar | MathSciNet
  3. M. Elad, B. Matalon, and M. Zibulevsky, “Image denoising with shrinkage and redundant representations,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 1924–1931, June 2006. View at: Publisher Site | Google Scholar
  4. M. A. T. Figueiredo and R. D. Nowak, “A bound optimization approach to wavelet-based image deconvolution,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '05), vol. 2, pp. 782–785, September 2005. View at: Publisher Site | Google Scholar
  5. M. A. T. Figueiredo and R. D. Nowak, “An EM algorithm for wavelet-based image restoration,” IEEE Transactions on Image Processing, vol. 12, no. 8, pp. 906–916, 2003. View at: Publisher Site | Google Scholar | MathSciNet
  6. J. M. Bioucas-Dias, “Bayesian wavelet-based image deconvolution: a gem algorithm exploiting a class of heavy-tailed priors,” IEEE Transactions on Image Processing, vol. 15, no. 4, pp. 937–951, 2006. View at: Publisher Site | Google Scholar | MathSciNet
  7. M. A. T. Figueiredo, J. M. Bioucas-Dias, and R. D. Nowak, “Majorization-minimization algorithms for wavelet-based image restoration,” IEEE Transactions on Image Processing, vol. 16, no. 12, pp. 2980–2991, 2007. View at: Publisher Site | Google Scholar | MathSciNet
  8. T. Chan, S. Esedoglu, F. Park, and A. Yip, “Recent developments in total variation image restoration,” in Mathematical Models of Computer Vision, vol. 17, 2005. View at: Google Scholar
  9. J.-F. Cai, B. Dong, S. Osher, and Z. Shen, “Image restoration: total variation, wavelet frames, and beyond,” Journal of the American Mathematical Society, vol. 25, no. 4, pp. 1033–1089, 2012. View at: Publisher Site | Google Scholar | MathSciNet
  10. M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems,” IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011. View at: Publisher Site | Google Scholar | MathSciNet
  11. M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “Fast image recovery using variable splitting and constrained optimization,” IEEE Transactions on Image Processing, vol. 19, no. 9, pp. 2345–2356, 2010. View at: Publisher Site | Google Scholar | MathSciNet
  12. M. Elad, P. Milanfar, and R. Rubinstein, “Analysis versus synthesis in signal priors,” Inverse Problems, vol. 23, no. 3, pp. 947–968, 2007. View at: Publisher Site | Google Scholar | MathSciNet
  13. J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Transactions on Image Processing, vol. 16, no. 12, pp. 2992–3004, 2007. View at: Publisher Site | Google Scholar | MathSciNet
  14. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
  15. S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Transactions on Signal Processing, vol. 57, no. 7, pp. 2479–2493, 2009. View at: Publisher Site | Google Scholar | MathSciNet
  16. Y. Nesterov, Introductory Lectures on Convex Optimization, 2004.
  17. Y. Nesterov, “A method of solving a convex programming problem with convergence rate o (1/k2),” Soviet Mathematics Doklady, vol. 27, no. 2, pp. 372–376, 1983. View at: Google Scholar
  18. J. Eckstein and D. P. Bertsekas, “On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators,” Mathematical Programming, vol. 55, no. 1–3, pp. 293–318, 1992. View at: Publisher Site | Google Scholar | MathSciNet
  19. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2010. View at: Publisher Site | Google Scholar | Zentralblatt MATH
  20. S. Setzer, “Split bregman algorithm, douglas-rachford splitting and frame shrinkage,” in Scale Space and Variational Methods in Computer Vision, vol. 5567 of Lecture Notes in Computer Science, pp. 464–476, Springer, Berlin, Germany, 2009. View at: Publisher Site | Google Scholar
  21. W. Yin, S. Osher, D. Goldfarb, and J. Darbon, “Bregman iterative algorithms for l1-minimization with applications to compressed sensing,” SIAM Journal on Imaging Sciences, vol. 1, no. 1, pp. 143–168, 2008. View at: Publisher Site | Google Scholar | MathSciNet
  22. T. Goldstein and S. Osher, “The split Bregman method for L1-regularized problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 323–343, 2009. View at: Publisher Site | Google Scholar | MathSciNet
  23. S. Osher, Y. Mao, B. Dong, and W. Yin, “Fast linearized Bregman iteration for compressive sensing and sparse denoising,” Communications in Mathematical Sciences, vol. 8, no. 1, pp. 93–111, 2010. View at: Publisher Site | Google Scholar | MathSciNet
  24. A. Langer and M. Fornasier, “Analysis of the adaptive iterative bregman algorithm,” preprint, 2010. View at: Google Scholar
  25. H. Zhang, L. Cheng, and J. Li, “Reweighted minimization model for MR image reconstruction with split Bregman method,” Science China: Information Sciences, vol. 55, no. 9, pp. 2109–2118, 2012. View at: Publisher Site | Google Scholar | MathSciNet
  26. J. Cai, S. Osher, and Z. Shen, “Split bregman methods and frame based image restoration,” Multiscale Modeling and Simulation, vol. 8, no. 2, pp. 337–369, 2009. View at: Publisher Site | Google Scholar
  27. S. Setzer, G. Steidl, and T. Teuber, “Deblurring Poissonian images by split Bregman techniques,” Journal of Visual Communication and Image Representation, vol. 21, no. 3, pp. 193–199, 2010. View at: Publisher Site | Google Scholar
  28. E. Candes, M. Wakin, and S. Boyd, “Enhancing sparsity by reweighted l1 minimization,” Journal of Fourier Analysis and Applications, vol. 14, no. 5, pp. 877–905, 2008. View at: Publisher Site | Google Scholar
  29. R. Chartrand, “Exact reconstruction of sparse signals via nonconvex minimization,” IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707–710, 2007. View at: Publisher Site | Google Scholar
  30. R. Chartrand and W. Yin, “Iteratively reweighted algorithms for compressive sensing,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '08), pp. 3869–3872, Las Vegas, Nev, USA, April 2008. View at: Publisher Site | Google Scholar
  31. R. Saab, R. Chartrand, and Ö. Yilmaz, “Stable sparse approximations via nonconvex optimization,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '08), pp. 3885–3888, April 2008. View at: Publisher Site | Google Scholar
  32. Z. Xu, X. Chang, F. Xu, and H. Zhang, “L1/2 regularization: a thresholding representation theory and a fast solver,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 7, pp. 1013–1027, 2012. View at: Publisher Site | Google Scholar
  33. R. Chartrand and V. Staneva, “Restricted isometry properties and nonconvex compressive sensing,” Inverse Problems, vol. 24, no. 3, Article ID 035020, 14 pages, 2008. View at: Publisher Site | Google Scholar | MathSciNet
  34. S. Foucart and M. Lai, “Sparsest solutions of underdetermined linear systems via lq-minimization for 0<q<1,” Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 395–407, 2009. View at: Publisher Site | Google Scholar | MathSciNet
  35. Q. Sun, “Recovery of sparsest signals via q-minimization,” Applied and Computational Harmonic Analysis, vol. 32, no. 3, pp. 329–341, 2012. View at: Publisher Site | Google Scholar | MathSciNet
  36. X. Chen and W. Zhou, “Convergence of reweighted 1 minimization algorithms and unique solution of truncated p minimization,” Tech. Rep., The Hong Kong Polytechnic University, 2010. View at: Google Scholar
  37. D. Needell, “Noisy signal recovery via iterative reweighted L1-minimization,” in Proceedings of the 43rd Asilomar Conference on Signals, Systems and Computers, pp. 113–117, Pacific Grove, Calif, USA, November 2009. View at: Publisher Site | Google Scholar
  38. I. Daubechies, R. DeVore, M. Fornasier, and C. S. Güntürk, “Iteratively reweighted least squares minimization for sparse recovery,” Communications on Pure and Applied Mathematics, vol. 63, no. 1, pp. 1–38, 2010. View at: Publisher Site | Google Scholar | MathSciNet
  39. D. Wipf and S. Nagarajan, “Iterative reweighted 1 and 2 methods for finding sparse solutions,” IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 317–329, 2010. View at: Publisher Site | Google Scholar
  40. M. A. Khajehnejad, W. Xu, A. S. Avestimehr, and B. Hassibi, “Improved sparse recovery thresholds with two-step reweighted 1 minimization,” in Proceeding of the IEEE International Symposium on Information Theory (ISIT '10), pp. 1603–1607, Austin, Tex, USA, June 2010. View at: Publisher Site | Google Scholar
  41. R. Courant, “Variational methods for the solution of problems of equilibrium and vibrations,” Bulletin of the American Mathematical Society, vol. 49, no. 1, pp. 1–23, 1943. View at: Google Scholar | MathSciNet
  42. Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
  43. J. Nocedal and S. Wright, Numerical Optimization, Springer, 1999. View at: Publisher Site | MathSciNet
  44. H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, CMS Books in Mathematics, Springer, New York, NY, USA, 2011. View at: Publisher Site | MathSciNet
  45. Z. Lu, “Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming,” Mathematical Programming, pp. 1–31, 2012. View at: Google Scholar
  46. L. Grippo and M. Sciandrone, “On the convergence of the block nonlinear Gauss-Seidel method under convex constraints,” Operations Research Letters, vol. 26, no. 3, pp. 127–136, 2000. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet

Copyright © 2014 Xiangrong Zeng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

