Mathematical Problems in Engineering
Volume 2014, Article ID 358742, 23 pages
http://dx.doi.org/10.1155/2014/358742
Research Article

A Greedy Multistage Convex Relaxation Algorithm Applied to Structured Group Sparse Reconstruction Problems Based on Iterative Support Detection

School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China

Received 27 April 2014; Revised 27 August 2014; Accepted 29 August 2014; Published 21 October 2014

Academic Editor: Yi-Kuei Lin

Copyright © 2014 Liangtian He and Yilun Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Linked References

  1. M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” Journal of the Royal Statistical Society, Series B: Statistical Methodology, vol. 68, no. 1, pp. 49–67, 2006.
  2. F. R. Bach, “Consistency of the group lasso and multiple kernel learning,” Journal of Machine Learning Research, vol. 9, pp. 1179–1225, 2008.
  3. S. Ma, X. Song, and J. Huang, “Supervised group Lasso with applications to microarray data analysis,” BMC Bioinformatics, vol. 8, no. 1, article 60, 2007.
  4. D. Eiwen, G. Tauböck, F. Hlawatsch, and H. Feichtinger, Group Sparsity Methods for Compressive Channel Estimation in Doubly Dispersive Multicarrier Systems, 2010.
  5. E. van den Berg, M. Schmidt, M. Friedlander, and K. Murphy, “Group sparsity via linear-time projection,” Tech. Rep., Department of Computer Science, University of British Columbia, Vancouver, Canada, 2008.
  6. J. Liu, S. Ji, and J. Ye, SLEP: Sparse Learning with Efficient Projections, Arizona State University, 2009.
  7. Z. Qin, K. Scheinberg, and D. Goldfarb, “Efficient block-coordinate descent algorithms for the Group Lasso,” Mathematical Programming Computation, vol. 5, no. 2, pp. 143–169, 2013.
  8. S. J. Wright, R. D. Nowak, and M. A. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Transactions on Signal Processing, vol. 57, no. 7, pp. 2479–2493, 2009.
  9. W. Deng, W. Yin, and Y. Zhang, “Group sparse optimization by alternating direction method,” in Wavelets and Sparsity XV, vol. 8858 of Proceedings of SPIE, 2013.
  10. Y. Wang and W. Yin, “Sparse signal reconstruction via iterative support detection,” SIAM Journal on Imaging Sciences, vol. 3, no. 3, pp. 462–491, 2010.
  11. J. Huang, T. Zhang, and D. Metaxas, “Learning with structured sparsity,” Journal of Machine Learning Research, vol. 12, pp. 3371–3412, 2011.
  12. S. Kim and E. P. Xing, “Tree-guided group lasso for multi-task regression with structured sparsity,” in Proceedings of the 27th International Conference on Machine Learning (ICML '10), pp. 543–550, June 2010.
  13. F. Bach, R. Jenatton, J. Mairal, and G. Obozinski, “Structured sparsity through convex optimization,” Statistical Science, vol. 27, no. 4, pp. 450–468, 2012.
  14. T. Zhang, “Analysis of multi-stage convex relaxation for sparse regularization,” Journal of Machine Learning Research, vol. 11, pp. 1081–1107, 2010.
  15. N. Vaswani and W. Lu, “Modified-CS: modifying compressive sensing for problems with partially known support,” IEEE Transactions on Signal Processing, vol. 58, no. 9, pp. 4595–4607, 2010.
  16. W. Lu and N. Vaswani, “Regularized modified BPDN for noisy sparse reconstruction with partial erroneous support and signal value knowledge,” IEEE Transactions on Signal Processing, vol. 60, no. 1, pp. 182–196, 2012.
  17. L. Jacques, “A short note on compressed sensing with partially known signal support,” Signal Processing, vol. 90, no. 12, pp. 3308–3312, 2010.
  18. M. P. Friedlander, H. Mansour, R. Saab, and O. Yilmaz, “Recovering compressively sampled signals using partial support information,” IEEE Transactions on Information Theory, vol. 58, no. 2, pp. 1122–1134, 2012.
  19. R. E. Carrillo, L. F. Polania, and K. E. Barner, “Iterative algorithms for compressed sensing with partially known support,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '10), pp. 3654–3657, March 2010.
  20. R. E. Carrillo, L. F. Polanía, and K. E. Barner, “Iterative hard thresholding for compressed sensing with partially known support,” in Proceedings of the 36th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '11), pp. 4028–4031, May 2011.
  21. E. J. Candès, M. B. Wakin, and S. P. Boyd, “Enhancing sparsity by reweighted l1 minimization,” The Journal of Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 877–905, 2008.
  22. R. Chartrand and W. Yin, “Iteratively reweighted algorithms for compressive sensing,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '08), pp. 3869–3872, April 2008.
  23. E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207–1223, 2006.
  24. D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  25. W. Guo and W. Yin, “Edge guided reconstruction for compressive imaging,” SIAM Journal on Imaging Sciences, vol. 5, no. 3, pp. 809–834, 2012.
  26. Y. Zhang, An Alternating Direction Algorithm for Nonnegative Matrix Factorization, Tech. Rep. TR10-03, Rice University, 2010.
  27. Z. Wen, W. Yin, and Y. Zhang, “Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm,” Mathematical Programming Computation, vol. 4, no. 4, pp. 333–361, 2012.
  28. Y. Shen, Z. Wen, and Y. Zhang, “Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization,” Optimization Methods & Software, vol. 29, no. 2, pp. 239–263, 2014.
  29. Y. Xu, W. Yin, Z. Wen, and Y. Zhang, “An alternating direction algorithm for matrix completion with nonnegative factors,” Frontiers of Mathematics in China, vol. 7, no. 2, pp. 365–384, 2011.
  30. M. J. D. Powell, “A method for nonlinear constraints in minimization problems,” in Optimization, R. Fletcher, Ed., pp. 283–298, Academic Press, New York, NY, USA, 1969.
  31. R. Glowinski and P. Le Tallec, Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics, Society for Industrial and Applied Mathematics, 1989.
  32. R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer, 2008.
  33. L. Jacob, G. Obozinski, and J.-P. Vert, “Group lasso with overlap and graph lasso,” in Proceedings of the 26th International Conference on Machine Learning (ICML '09), pp. 433–440, ACM, June 2009.
  34. D. L. Donoho, “De-noising by soft-thresholding,” IEEE Transactions on Information Theory, vol. 41, no. 3, pp. 613–627, 1995.
  35. D. H. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
  36. YALL1-Group: a solver for group/joint sparse reconstruction, http://www.convexoptimization.com/wikimization/index.php.
  37. J. Meng, W. Yin, H. Li, E. Hossain, and Z. Han, “Collaborative spectrum sensing from sparse observations in cognitive radio networks,” IEEE Journal on Selected Areas in Communications, vol. 29, no. 2, pp. 327–337, 2011.
  38. T. S. Rappaport, Wireless Communications: Principles and Practice, Prentice Hall, 2nd edition, 2002.