Nonlinear Analysis: Algorithm, Convergence, and Applications 2014
Research Article | Open Access
Nonmonotone Adaptive Barzilai-Borwein Gradient Algorithm for Compressed Sensing
Abstract
We study a nonmonotone adaptive Barzilai-Borwein gradient algorithm for $\ell_1$-norm minimization problems arising from compressed sensing. At each iteration, the generated search direction enjoys the descent property and can be easily derived by minimizing a local approximate quadratic model while exploiting the favorable structure of the $\ell_1$ norm. Under some suitable conditions, a global convergence result is established. Numerical results illustrate that the proposed method is promising and competitive with the existing algorithms NBBL1 and TwIST.
1. Introduction
In recent years, algorithms for finding sparse solutions to underdetermined linear systems of equations have been intensively investigated in signal processing and compressed sensing. The fundamental principle of compressed sensing (CS) is that a sparse signal $x \in \mathbb{R}^{n}$ can be recovered from the underdetermined linear system $b = Ax$, where $A \in \mathbb{R}^{m \times n}$ (often $m \ll n$). Defining the $\ell_0$ norm $\|x\|_0$ of a vector $x$ as the number of nonzero components of $x$, one natural way to reconstruct $x$ from the system is to solve the following problem via a certain reconstruction technique:
$$\min_{x} \|x\|_0 \quad \text{subject to} \quad Ax = b. \tag{1}$$
However, the $\ell_0$-norm problem is combinatorial and generally computationally intractable. A fundamental decoding model in CS replaces the $\ell_0$ norm by the $\ell_1$ norm, which is defined as $\|x\|_1 = \sum_{i=1}^{n} |x_i|$. The resulting adaptation of (1) is the so-called basis pursuit (BP) problem [1]:
$$\min_{x} \|x\|_1 \quad \text{subject to} \quad Ax = b. \tag{2}$$
It is shown that, under some reasonable conditions, problem (2) produces the desired solution with high probability [2]. Since $b$ contains some noise in most practical applications, the constraint in (2) should be relaxed, which leads to the penalized least squares problem
$$\min_{x} \mu \|x\|_1 + \frac{1}{2} \|Ax - b\|_2^2. \tag{3}$$
Here, $\mu > 0$ is related to the Lagrange multiplier of the constraint in (2).
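To make the three models above concrete, the sketch below evaluates the $\ell_0$ count of problem (1), the $\ell_1$ norm of problem (2), and the penalized least squares objective of model (3) in Python with NumPy. The function names and the toy data are illustrative, not part of the original paper.

```python
import numpy as np

def l0_norm(x):
    """Number of nonzero components of x, i.e., the l0 'norm' used in problem (1)."""
    return int(np.count_nonzero(x))

def l1_norm(x):
    """l1 norm of x: the sum of the absolute values of its components."""
    return float(np.sum(np.abs(x)))

def penalized_objective(x, A, b, mu):
    """Objective of the penalized least squares model (3):
    mu * ||x||_1 + 0.5 * ||A x - b||_2^2."""
    r = A @ x - b
    return mu * l1_norm(x) + 0.5 * float(r @ r)
```

When the residual $Ax - b$ is zero, the objective reduces to $\mu\|x\|_1$, which is a quick sanity check for an implementation.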
It follows from some existing results that if a signal is sparse or approximately sparse in some orthogonal basis, then an accurate recovery can be obtained when $A$ is a random matrix projection [3]. Quite a number of algorithms have been proposed and studied for solving the aforementioned problems arising in CS. Recently, some first-order methods have become popular for solving (3), such as the projected steepest descent method [4] and the gradient projection algorithm (GPSR) proposed by Figueiredo et al. [5]. Moreover, based on a smoothing technique studied in [6], a fast and accurate first-order algorithm called NESTA was proposed in [7]. By an operator splitting technique, Hale et al. derived the iterative shrinkage/thresholding fixed-point continuation algorithm (FPC) [8]. One of the most widely studied first-order methods is the iterative shrinkage/thresholding (IST) method [9–11], which was designed for wavelet-based image deconvolution. TwIST [12, 13] and FISTA [14] speed up the performance of IST; they have virtually the same per-iteration complexity but better convergence properties. Another closely related method is the sparse reconstruction algorithm SpaRSA [15], which minimizes nonsmooth convex problems with separable structure. In [16], the authors proposed a nonsmooth-equations-based method for $\ell_1$-norm problems. SPGL1 [17] solves a least squares problem with an $\ell_1$-norm constraint by the spectral gradient projection method with an efficient Euclidean projection onto the $\ell_1$-norm ball. In [18], Yun and Toh studied a block coordinate gradient descent (CGD) method for solving (3). Recently, the alternating direction method (ADM) has received much attention for solving total variation regularization problems in image restoration and is also capable of solving the $\ell_1$-norm regularization problems in CS [19, 20].
Very recently, Xiao et al. proposed a Barzilai-Borwein gradient algorithm [21] for solving $\ell_1$-regularized nonsmooth minimization problems (NBBL1) [22], in which the objective is locally approximated at each iteration by a convex quadratic model whose Hessian is replaced by a spectral coefficient times the identity matrix. Motivated by this work, we propose a nonmonotone adaptive Barzilai-Borwein gradient algorithm for $\ell_1$-norm minimization in compressed sensing, which is based on a new quasi-Newton equation [23] and a new adaptive spectral coefficient. Under reasonable assumptions, we establish its convergence. Numerical experiments illustrate that the proposed method efficiently recovers sparse signals arising in compressive sensing and outperforms NBBL1.
A full description of the proposed algorithm is presented in the next section, where we also establish its global convergence under some suitable conditions. In Section 3, some numerical results are reported to illustrate the efficiency of the proposed method. Finally, concluding remarks are given in Section 4.
2. Proposed Algorithm and Convergence Result
In this section, we construct an iterative algorithm to solve the $\ell_1$-norm regularization problem arising from sparse solution recovery in compressed sensing. Before stating the steps of our method, we first give a brief description of preliminary results for the following unconstrained optimization problem:
$$\min_{x \in \mathbb{R}^{n}} f(x),$$
where $f$ is a continuously differentiable function.
In [23], Wei et al. proposed a new quasi-Newton equation and then derived a new conjugacy condition by using it. Using the Taylor formula for the objective function $f$,
$$f(x_k) \approx f(x_{k+1}) + g_{k+1}^{T}(x_k - x_{k+1}) + \frac{1}{2}(x_k - x_{k+1})^{T} \nabla^2 f(x_{k+1}) (x_k - x_{k+1}),$$
where $f(\cdot)$ (resp., $\nabla^2 f(\cdot)$) denotes the function value (resp., Hessian matrix) and $g_{k+1}$ denotes the gradient at $x_{k+1}$. Hence, substituting $s_k = x_{k+1} - x_k$,
$$f_k \approx f_{k+1} - g_{k+1}^{T} s_k + \frac{1}{2} s_k^{T} \nabla^2 f(x_{k+1}) s_k.$$
Therefore,
$$s_k^{T} \nabla^2 f(x_{k+1}) s_k \approx 2(f_k - f_{k+1}) + 2 g_{k+1}^{T} s_k.$$
Consider $B_{k+1}$ as a new approximation of $\nabla^2 f(x_{k+1})$ such that
$$s_k^{T} B_{k+1} s_k = s_k^{T} \tilde{y}_k, \quad \tilde{y}_k = y_k + \frac{\vartheta_k}{s_k^{T} s_k} s_k,$$
where $f_k = f(x_k)$, $g_k = \nabla f(x_k)$, $y_k = g_{k+1} - g_k$, and $\vartheta_k = 2(f_k - f_{k+1}) + (g_{k+1} + g_k)^{T} s_k$. This suggests the following new quasi-Newton equation:
$$B_{k+1} s_k = \tilde{y}_k. \tag{9}$$
In [24], Li et al. make a modification of the $\tilde{y}_k$ in (9) as follows:
$$\tilde{y}_k = y_k + \frac{\max\{\vartheta_k, 0\}}{s_k^{T} s_k} s_k, \tag{10}$$
and Yuan and Wei [25] make some further studies on it. Observe that this new quasi-Newton equation contains not only gradient information but also function value information at the present and the previous step. In general, such $B_{k+1}$ will be produced by updating $B_k$ with some typical and popular formulae such as BFGS, DFP, and SR1. Furthermore, let the approximation Hessian be a diagonal matrix with positive components; that is, $B_{k+1} = \alpha_{k+1} I$ with an identity matrix $I$ and $\alpha_{k+1} > 0$. Then, the quasi-Newton condition (9) possesses the following form:
$$\alpha_{k+1} I s_k = \tilde{y}_k. \tag{11}$$
Multiplying both sides by $s_k^{T}$, it follows that
$$\alpha_{k+1} = \frac{s_k^{T} \tilde{y}_k}{s_k^{T} s_k}, \tag{12}$$
and, multiplying both sides by $\tilde{y}_k^{T}$, it gives
$$\alpha_{k+1} = \frac{\tilde{y}_k^{T} \tilde{y}_k}{\tilde{y}_k^{T} s_k}, \tag{13}$$
where $\tilde{y}_k$ is defined as (10). If $s_k^{T} \tilde{y}_k > 0$ holds, then the matrix $B_{k+1} = \alpha_{k+1} I$ is positive definite, which ensures that the search direction is descent at the current point.
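The modified quasi-Newton quantities can be computed directly from two consecutive iterates. The sketch below, a Python illustration rather than the paper's own code, uses $s = x_{\text{new}} - x_{\text{old}}$, $y = g_{\text{new}} - g_{\text{old}}$, $\vartheta = 2(f_{\text{old}} - f_{\text{new}}) + (g_{\text{new}} + g_{\text{old}})^{T} s$, the safeguarded modification $\tilde{y} = y + (\max\{\vartheta, 0\}/s^{T}s)\,s$ in the style of Li et al. [24], and returns the two induced spectral coefficients.

```python
import numpy as np

def modified_spectral_coefficients(x_old, x_new, g_old, g_new, f_old, f_new):
    """Return the two spectral coefficients obtained from the modified
    quasi-Newton equation of Wei et al. [23] with the diagonal (alpha*I)
    Hessian approximation:
        alpha1 = s'y~ / s's   and   alpha2 = y~'y~ / y~'s,
    where y~ = y + (max(theta, 0) / s's) * s is the modified gradient
    difference (the max(theta, 0) safeguard follows Li et al. [24])."""
    s = x_new - x_old
    y = g_new - g_old
    theta = 2.0 * (f_old - f_new) + (g_new + g_old) @ s
    y_tilde = y + (max(theta, 0.0) / (s @ s)) * s
    sy = s @ y_tilde
    return sy / (s @ s), (y_tilde @ y_tilde) / sy
```

For the quadratic $f(x) = \frac{1}{2}x^{T}x$ (whose Hessian is the identity), both coefficients recover the exact value 1, since the Taylor expansion used in the derivation is then exact.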
Now, we focus our attention on the $\ell_1$-norm minimization problem (3). The algorithm can be described in the iterative form
$$x_{k+1} = x_k + \lambda_k d_k, \tag{14}$$
where $\lambda_k > 0$ is a stepsize and $d_k$ is a search direction defined by minimizing a quadratic approximated model of $f(x) = \mu\|x\|_1 + \frac{1}{2}\|Ax - b\|_2^2$. Since the term $\|x\|_1$ is not differentiable, at the current iterate $x_k$, the objective function is approximated by the quadratic approximation
$$q_k(d) = g_k^{T} d + \frac{\alpha_k}{2} d^{T} d + \mu \|x_k + d\|_1, \tag{15}$$
where $g_k = A^{T}(A x_k - b)$ and $\alpha_k$ is a positive number. The smooth part $g_k^{T} d + \frac{\alpha_k}{2} d^{T} d$ in $q_k$ can be considered as an approximate Taylor expansion of $\frac{1}{2}\|A(x_k + d) - b\|_2^2$ in which the Hessian $A^{T}A$ is replaced by $\alpha_k I$. Since (15) is separable with respect to the components of $d$, minimizing (15) yields
$$\min_{d_i} \; g_{k,i} d_i + \frac{\alpha_k}{2} d_i^2 + \mu |x_{k,i} + d_i|, \quad i = 1, \ldots, n, \tag{16}$$
where $x_{k,i}$, $g_{k,i}$, and $d_i$ denote the $i$th component of $x_k$, $g_k$, and $d$, respectively. The favorable structure of (16) admits the explicit solution
$$x_{k,i} + d_i = \operatorname{shrink}\left(x_{k,i} - \frac{g_{k,i}}{\alpha_k}, \frac{\mu}{\alpha_k}\right) = \operatorname{sign}\left(x_{k,i} - \frac{g_{k,i}}{\alpha_k}\right) \max\left\{\left|x_{k,i} - \frac{g_{k,i}}{\alpha_k}\right| - \frac{\mu}{\alpha_k},\, 0\right\}. \tag{17}$$
Hence, the search direction at the current point is
$$d_k = \operatorname{shrink}\left(x_k - \frac{g_k}{\alpha_k}, \frac{\mu}{\alpha_k}\right) - x_k. \tag{18}$$
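The explicit solution of the separable model is the classical componentwise shrinkage (soft-thresholding) operator. A minimal Python sketch, assuming the standard shrink operator and a direction of the form $d = \operatorname{shrink}(x - g/\alpha, \mu/\alpha) - x$ (function names are illustrative):

```python
import numpy as np

def soft_threshold(z, tau):
    """Componentwise shrinkage operator:
    soft_threshold(z, tau)_i = sign(z_i) * max(|z_i| - tau, 0),
    the minimizer of tau*|t| + 0.5*(t - z_i)^2 for each component."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def search_direction(x, g, alpha, mu):
    """Search direction d = shrink(x - g/alpha, mu/alpha) - x, where g is
    the gradient of the smooth least squares term and alpha > 0 plays the
    role of the spectral (Barzilai-Borwein) coefficient."""
    return soft_threshold(x - g / alpha, mu / alpha) - x
```

Because the shrinkage acts componentwise, computing the direction costs only O(n) once the gradient is available, which is what makes this class of methods attractive for large-scale signals.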
In this paper, we adopt the following adaptive Barzilai-Borwein rule for the coefficient $\alpha_k$ in (18): at each iteration, $\alpha_k$ is chosen adaptively between the two spectral coefficients defined in (12) and (13), respectively.
In light of all the derivations above, we now describe the nonmonotone adaptive Barzilai-Borwein gradient algorithm (abbreviated as NABBL1); see Algorithm 1.

Remark 1. We have shown that if $s_k^{T} \tilde{y}_k > 0$, then the generated direction $d_k$ is descent. However, this condition may fail to be fulfilled, and the hereditary descent property is then no longer guaranteed. To cope with this defect, we keep the sequence $\{\alpha_k\}$ uniformly bounded; that is, for a sufficiently small $\alpha_{\min} > 0$ and a sufficiently large $\alpha_{\max}$, the coefficient is forced as
$$\alpha_k = \max\{\alpha_{\min}, \min\{\alpha_k, \alpha_{\max}\}\}.$$
This approach ensures that $\alpha_k$ is bounded away from zero and consequently ensures that $d_k$ is a descent direction at each iteration.
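The safeguard in Remark 1 amounts to a simple clamp of the spectral coefficient. A one-line sketch, with illustrative bound values as assumptions (the paper's own choices of the bounds are parameters of the algorithm):

```python
def safeguard(alpha, alpha_min=1e-10, alpha_max=1e10):
    """Force the spectral coefficient into [alpha_min, alpha_max] so that
    the sequence stays uniformly bounded away from zero and infinity; the
    default bound values here are illustrative assumptions."""
    return min(max(alpha, alpha_min), alpha_max)
```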
We now present the main global convergence result of algorithm NABBL1. The desired convergence follows directly from Theorem 3.3 of [22]; we state it here for completeness.
Theorem 2. Let the sequences $\{x_k\}$ and $\{d_k\}$ be generated by Algorithm 1. Then, there exists a subsequence of $\{d_k\}$ converging to zero; that is, $\liminf_{k \to \infty} \|d_k\| = 0$.
3. Experimental Results
In this section, we describe some experiments to illustrate the good performance of the algorithm NABBL1 for reconstructing sparse signals. All experiments were performed in Matlab R2010a. The relative error
$$\mathrm{RelErr} = \frac{\|\tilde{x} - x^{*}\|}{\|x^{*}\|}$$
is used to measure the quality of the reconstructed signals, where $\tilde{x}$ denotes the reconstructed signal and $x^{*}$ denotes the original signal.
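The relative error above is a direct ratio of Euclidean norms; a minimal Python sketch (the function name is illustrative):

```python
import numpy as np

def rel_err(x_rec, x_orig):
    """Relative error ||x_rec - x_orig||_2 / ||x_orig||_2 between the
    reconstructed signal x_rec and the original signal x_orig."""
    return float(np.linalg.norm(x_rec - x_orig) / np.linalg.norm(x_orig))
```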
In our experiments, we consider a typical compressive sensing scenario, where the goal is to reconstruct a length-$n$ sparse signal from $m$ observations. The random matrix $A$ is a Gaussian matrix whose elements are generated independently from the standard normal distribution (randn in Matlab). In real applications, the measurement $b$ is usually contaminated by noise; that is, $b = Ax + \omega$, where $\omega$ is Gaussian noise distributed as $N(0, \sigma^{2} I)$.
We first test a small-size signal with , ; the original signal contains randomly placed nonzero elements. The proposed algorithm starts at a zero point and terminates when the relative change of two successive iterates is sufficiently small; that is,
$$\frac{\|x_{k+1} - x_k\|}{\|x_k\|} < \epsilon.$$
In this experiment, we take , , , and . In the line search, we choose , , , and . The original signal, the limited measurement, and the reconstructed signal at the given noise level are shown in Figure 1.
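The experimental setup above (a sparse signal, a Gaussian sensing matrix, noisy measurements, and a relative-change stopping test) can be sketched in Python with NumPy. All concrete sizes, seeds, and tolerances below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def make_cs_problem(m, n, k, sigma, rng):
    """Build a test instance as described in the text: a length-n signal
    with k randomly placed nonzero entries, an m-by-n Gaussian sensing
    matrix (entries from the standard normal distribution), and noisy
    measurements b = A x + omega with omega ~ N(0, sigma^2 I)."""
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    A = rng.standard_normal((m, n))
    b = A @ x + sigma * rng.standard_normal(m)
    return A, b, x

def relative_change_stop(x_new, x_old, tol=1e-4):
    """Stopping test on the relative change of two successive iterates;
    the tolerance value and the max(.,1) guard against a zero starting
    point are illustrative assumptions."""
    return np.linalg.norm(x_new - x_old) / max(np.linalg.norm(x_old), 1.0) < tol
```

With the noise level set to zero, the generated measurements satisfy $b = Ax$ exactly, which is useful for checking a solver before adding noise.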
Figure 1: (a) the original signal; (b) the limited measurement; (c) the reconstructed signal.
Comparing (a) to (c) in Figure 1, we clearly see that the original sparse signal is restored almost exactly: all the blue peaks are circled by the red circles. Altogether, this simple experiment shows that our algorithm performs quite well and provides an efficient approach to recover large sparse signals.
In our next experiment, we compare NABBL1 with the algorithms NBBL1 [22] and TwIST [13] on four different signals at a fixed noise level. In order to test the speed of the algorithms more fairly, each test is run five times and the average of the five results is listed in Table 1, in which we report the CPU time in seconds (time) required for the whole reconstruction process and the relative error (RelErr). From Table 1, we can see that algorithm NABBL1 is faster than NBBL1 and TwIST and requires fewer iterations than both algorithms on the different signals.

From Figure 2, NABBL1 usually decreases the relative error faster than NBBL1 and TwIST throughout the entire iteration process. We conclude that NABBL1 provides an efficient approach for solving the $\ell_1$-regularized nonsmooth problem from compressed sensing and is competitive with, or performs better than, NBBL1 and TwIST.
Figure 2: (a)-(b) decrease of the relative error during the iteration process for NABBL1, NBBL1, and TwIST.
4. Conclusion
In this paper, we proposed a nonmonotone adaptive Barzilai-Borwein gradient algorithm (NABBL1) for solving the $\ell_1$-regularized least squares problem arising from sparse solution recovery in compressed sensing. At each iteration, the generated search direction enjoys the descent property and can be easily derived by minimizing a local approximate quadratic model while exploiting the favorable structure of the $\ell_1$ norm. Numerical results illustrate that the proposed method is promising and competitive with the existing algorithms NBBL1 and the two-step IST (TwIST). A future topic is to extend the NABBL1 method to matrix trace-norm minimization problems in compressed sensing and to minimization problems in computed tomography reconstruction [26, 27].
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1998.
[2] D. L. Donoho, “For most large underdetermined systems of linear equations the minimal ${l}_{1}$-norm solution is also the sparsest solution,” Communications on Pure and Applied Mathematics, vol. 59, no. 6, pp. 797–829, 2006.
[3] M. F. Duarte and Y. C. Eldar, “Structured compressed sensing: from theory to applications,” IEEE Transactions on Signal Processing, vol. 59, no. 9, pp. 4053–4085, 2011.
[4] I. Daubechies, M. Fornasier, and I. Loris, “Accelerated projected gradient method for linear inverse problems with sparsity constraints,” Journal of Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 764–792, 2008.
[5] M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586–597, 2007.
[6] Y. Nesterov, “Smooth minimization of non-smooth functions,” Mathematical Programming, vol. 103, no. 1, pp. 127–152, 2005.
[7] S. Becker, J. Bobin, and E. J. Candès, “NESTA: a fast and accurate first-order method for sparse recovery,” SIAM Journal on Imaging Sciences, vol. 4, no. 1, pp. 1–39, 2011.
[8] E. T. Hale, W. Yin, and Y. Zhang, “Fixed-point continuation for ${l}_{1}$-minimization: methodology and convergence,” SIAM Journal on Optimization, vol. 19, no. 3, pp. 1107–1130, 2008.
[9] M. A. T. Figueiredo and R. D. Nowak, “An EM algorithm for wavelet-based image restoration,” IEEE Transactions on Image Processing, vol. 12, no. 8, pp. 906–916, 2003.
[10] M. A. T. Figueiredo and R. D. Nowak, “A bound optimization approach to wavelet-based image deconvolution,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '05), vol. 2, pp. 782–785, Genoa, Italy, September 2005.
[11] R. D. Nowak and M. A. T. Figueiredo, “Fast wavelet-based image deconvolution using the EM algorithm,” in Proceedings of the 35th Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 371–375, Pacific Grove, Calif, USA, November 2001.
[12] J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Transactions on Image Processing, vol. 16, no. 12, pp. 2992–3004, 2007.
[13] J. M. Bioucas-Dias and M. A. T. Figueiredo, “Two-step algorithms for linear inverse problems with non-quadratic regularization,” in Proceedings of the 14th IEEE International Conference on Image Processing (ICIP '07), vol. 1, pp. 105–108, San Antonio, Tex, USA, September 2007.
[14] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
[15] S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Transactions on Signal Processing, vol. 57, no. 7, pp. 2479–2493, 2009.
[16] Y.-H. Xiao, Q.-Y. Wang, and Q.-J. Hu, “Non-smooth equations based method for ${l}_{1}$-norm problems with applications to compressed sensing,” Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 11, pp. 3570–3577, 2011.
[17] E. van den Berg and M. P. Friedlander, “Probing the Pareto frontier for basis pursuit solutions,” SIAM Journal on Scientific Computing, vol. 31, no. 2, pp. 890–912, 2008.
[18] S. Yun and K.-C. Toh, “A coordinate gradient descent method for ${l}_{1}$-regularized convex minimization,” Computational Optimization and Applications, vol. 48, no. 2, pp. 273–307, 2011.
[19] J. Yang, Y. Zhang, and W. Yin, “A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 288–297, 2010.
[20] J. Yang and Y. Zhang, “Alternating direction algorithms for ${l}_{1}$-problems in compressive sensing,” SIAM Journal on Scientific Computing, vol. 33, no. 1, pp. 250–278, 2011.
[21] J. Barzilai and J. M. Borwein, “Two-point step size gradient methods,” IMA Journal of Numerical Analysis, vol. 8, no. 1, pp. 141–148, 1988.
[22] Y.-H. Xiao, S.-Y. Wu, and L.-Q. Qi, “Nonmonotone Barzilai-Borwein gradient algorithm for ${l}_{1}$-regularized nonsmooth minimization in compressive sensing,” Journal of Scientific Computing, 2014.
[23] Z. Wei, G. Li, and L. Qi, “New quasi-Newton methods for unconstrained optimization problems,” Applied Mathematics and Computation, vol. 175, no. 2, pp. 1156–1188, 2006.
[24] G. Li, C. Tang, and Z. Wei, “New conjugacy condition and related new conjugate gradient methods for unconstrained optimization,” Journal of Computational and Applied Mathematics, vol. 202, no. 2, pp. 523–539, 2007.
[25] G. Yuan and Z. Wei, “Convergence analysis of a modified BFGS method on convex minimizations,” Computational Optimization and Applications, vol. 47, no. 2, pp. 237–255, 2010.
[26] J. Ma, J. Huang, H. Zhang et al., “Low-dose computed tomography image restoration using previous normal-dose scan,” Medical Physics, vol. 38, no. 10, pp. 5713–5731, 2011.
[27] J. Ma, H. Zhang, Y. Gao et al., “Iterative image reconstruction for cerebral perfusion CT using a pre-contrast scan induced edge-preserving prior,” Physics in Medicine and Biology, vol. 57, no. 22, pp. 7519–7542, 2012.
Copyright
Copyright © 2014 Yuanying Qiu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.