Abstract and Applied Analysis
Volume 2014, Article ID 197476, 8 pages
http://dx.doi.org/10.1155/2014/197476
Research Article

Convergence Analysis of an Empirical Eigenfunction-Based Ranking Algorithm with Truncated Sparsity

1School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
2College of Information, Dalian University, Dalian 116622, China
3Beijing Key Laboratory of Multimedia Intelligent Software Technology, College of Metropolitan Transportation, Beijing University of Technology, Beijing 100124, China

Received 13 April 2014; Revised 8 July 2014; Accepted 9 July 2014; Published 21 July 2014

Academic Editor: Sergei V. Pereverzyev

Copyright © 2014 Min Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. S. Clémençon, G. Lugosi, and N. Vayatis, "Ranking and empirical minimization of U-statistics," The Annals of Statistics, vol. 36, no. 2, pp. 844–874, 2008.
  2. S. Clémençon and N. Vayatis, "Ranking the best instances," Journal of Machine Learning Research, vol. 8, pp. 2671–2699, 2007.
  3. W. Rejchel, "On ranking and generalization bounds," Journal of Machine Learning Research, vol. 13, pp. 1373–1392, 2012.
  4. A. Slivkins, F. Radlinski, and S. Gollapudi, "Ranked bandits in metric spaces: learning diverse rankings over large document collections," Journal of Machine Learning Research, vol. 14, pp. 399–436, 2013.
  5. G. Blanchard, P. Massart, R. Vert, and L. Zwald, "Kernel projection machine: a new tool for pattern recognition," in Proceedings of the Advances in Neural Information Processing Systems (NIPS '04), pp. 1649–1656, 2004.
  6. H. Chen, H. Xiang, Y. Tang, Z. Yu, and X. Zhang, "Approximation analysis of empirical feature-based learning with truncated sparsity," in Proceedings of the International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR '12), pp. 118–124, Xi'an, China, July 2012.
  7. X. Guo and D. X. Zhou, "An empirical feature-based learning algorithm producing sparse approximations," Applied and Computational Harmonic Analysis, vol. 32, pp. 389–400, 2012.
  8. S. Agarwal and P. Niyogi, "Generalization bounds for ranking algorithms via algorithmic stability," Journal of Machine Learning Research, vol. 10, pp. 441–474, 2009.
  9. T. Hu, J. Fan, Q. Wu, and D. X. Zhou, "Learning theory approach to minimum error entropy criterion," Journal of Machine Learning Research, vol. 14, pp. 377–397, 2013.
  10. N. Aronszajn, "Theory of reproducing kernels," Transactions of the American Mathematical Society, vol. 68, pp. 337–404, 1950.
  11. F. Cucker and D. X. Zhou, Learning Theory: An Approximation Theory Viewpoint, vol. 24 of Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, Cambridge, UK, 2007.
  12. H. Chen, "The convergence rate of a regularized ranking algorithm," Journal of Approximation Theory, vol. 164, no. 12, pp. 1513–1519, 2012.
  13. H. W. Sun and Q. Wu, "Indefinite kernel network with dependent sampling," Analysis and Applications, vol. 11, no. 5, Article ID 1350020, 15 pages, 2013.
  14. M. Xu, Q. Fang, and S. F. Wang, "On empirical eigenfunction-based ranking with l1 norm regularization," submitted.
  15. F. Bauer, S. Pereverzev, and L. Rosasco, "On regularization algorithms in learning theory," Journal of Complexity, vol. 23, no. 1, pp. 52–72, 2007.
  16. S. Smale and D. X. Zhou, "Learning theory estimates via integral operators and their approximations," Constructive Approximation, vol. 26, no. 2, pp. 153–172, 2007.
  17. S. Mukherjee and D. X. Zhou, "Learning coordinate covariances via gradients," Journal of Machine Learning Research, vol. 7, pp. 519–549, 2006.
  18. R. Bhatia and L. Elsner, "The Hoffman-Wielandt inequality in infinite dimensions," Proceedings of the Indian Academy of Sciences: Mathematical Sciences, vol. 104, no. 3, pp. 483–494, 1994.
  19. A. J. Hoffman and H. W. Wielandt, "The variation of the spectrum of a normal matrix," Duke Mathematical Journal, vol. 20, pp. 37–39, 1953.
  20. T. Kato, "Variation of discrete spectra," Communications in Mathematical Physics, vol. 111, no. 3, pp. 501–504, 1987.
  21. V. Koltchinskii and E. Giné, "Random matrix approximation of spectra of integral operators," Bernoulli, vol. 6, no. 1, pp. 113–167, 2000.
  22. E. de Vito, A. Caponnetto, and L. Rosasco, "Model selection for regularized least-squares algorithm in learning theory," Foundations of Computational Mathematics, vol. 5, no. 1, pp. 59–85, 2005.