Computational and Mathematical Methods in Medicine
Volume 2013, Article ID 768404, 10 pages
http://dx.doi.org/10.1155/2013/768404
Research Article

Iterative Reweighted Noninteger Norm Regularizing SVM for Gene Expression Data Classification

1Department of Automation, China University of Petroleum, Beijing 102249, China
2Beijing Aerospace Propulsion Institute, Beijing 100076, China

Received 8 May 2013; Accepted 26 June 2013

Academic Editor: Seiya Imoto

Copyright © 2013 Jianwei Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Linked References

  1. C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
  2. E. Amaldi and V. Kann, “On the approximability of minimizing nonzero variables or unsatisfied relations in linear systems,” Theoretical Computer Science, vol. 209, no. 1-2, pp. 237–260, 1998.
  3. J. Zhu, T. Hastie, S. Rosset, and R. Tibshirani, “1-norm support vector machines,” in Proceedings of the 16th Annual Conference on Neural Information Processing Systems, pp. 145–146, MIT Press, Vancouver, Canada, 2003.
  4. S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1998.
  5. E. Candès and T. Tao, “Rejoinder: the Dantzig selector: statistical estimation when p is much larger than n,” Annals of Statistics, vol. 35, no. 6, pp. 2392–2404, 2007.
  6. J. A. Tropp, “Just relax: convex programming methods for identifying sparse signals in noise,” IEEE Transactions on Information Theory, vol. 52, no. 3, pp. 1030–1051, 2006.
  7. A. Miller, Subset Selection in Regression, Chapman and Hall, London, UK, 2002.
  8. M. J. Wainwright, “Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso),” IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2183–2202, 2009.
  9. R. Chartrand, “Exact reconstruction of sparse signals via nonconvex minimization,” IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707–710, 2007.
  10. P. S. Bradley and O. L. Mangasarian, “Feature selection via concave minimization and support vector machines,” in Proceedings of the 15th International Conference on Machine Learning (ICML '98), pp. 82–90, Morgan Kaufmann, Madison, Wisconsin, USA, 1998.
  11. A. Kabán and R. J. Durrant, “Learning with Lq<1 vs L1-norm regularization with exponentially many irrelevant features,” in Proceedings of the 19th European Conference on Machine Learning (ECML '08), pp. 580–596, Antwerp, Belgium, 2008.
  12. R. M. Leahy and B. D. Jeffs, “On the design of maximally sparse beamforming arrays,” IEEE Transactions on Antennas and Propagation, vol. 39, no. 8, pp. 1178–1187, 1991.
  13. G. Gasso, A. Rakotomamonjy, and S. Canu, “Recovering sparse signals with a certain family of nonconvex penalties and DC programming,” IEEE Transactions on Signal Processing, vol. 57, no. 12, pp. 4686–4698, 2009.
  14. I. Frank and J. Friedman, “A statistical view of some chemometrics regression tools (with discussion),” Technometrics, vol. 35, pp. 109–148, 1993.
  15. W. J. Fu, “Penalized regressions: the bridge versus the lasso,” Journal of Computational and Graphical Statistics, vol. 7, no. 3, pp. 397–416, 1998.
  16. S. Foucart and M.-J. Lai, “Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1,” Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 395–407, 2009.
  17. R. Gribonval and M. Nielsen, “Highly sparse representations from dictionaries are unique and independent of the sparseness measure,” Applied and Computational Harmonic Analysis, vol. 22, no. 3, pp. 335–355, 2007.
  18. K. Knight and W. Fu, “Asymptotics for Lasso-type estimators,” Annals of Statistics, vol. 28, no. 5, pp. 1356–1378, 2000.
  19. J. Huang, J. L. Horowitz, and S. Ma, “Asymptotic properties of bridge estimators in sparse high-dimensional regression models,” Annals of Statistics, vol. 36, no. 2, pp. 587–613, 2008.
  20. R. Chartrand and W. Yin, “Iteratively reweighted algorithms for compressive sensing,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '08), pp. 3869–3872, Las Vegas, Nev, USA, April 2008.
  21. R. Chartrand and V. Staneva, “Restricted isometry properties and nonconvex compressive sensing,” Inverse Problems, vol. 24, no. 3, Article ID 035020, 2008.
  22. E. J. Candès, M. B. Wakin, and S. P. Boyd, “Enhancing sparsity by reweighted ℓ1 minimization,” Journal of Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 877–905, 2008.
  23. J. Weston, A. Elisseeff, B. Schölkopf, and M. Tipping, “Use of the zero-norm with linear models and kernel methods,” Journal of Machine Learning Research, vol. 3, pp. 1439–1461, 2003.