ISRN Signal Processing
Volume 2013 (2013), Article ID 748914, 11 pages
http://dx.doi.org/10.1155/2013/748914
Research Article

A Novel Neuron in Kernel Domain

Zahra Khandan and Hadi Sadoghi Yazdi

1Department of Computer Science, Ferdowsi University of Mashhad, Mashhad, Iran
2Center of Excellence on Soft Computing and Intelligent Information Processing, Ferdowsi University of Mashhad, Mashhad, Iran

Received 23 June 2013; Accepted 22 July 2013

Academic Editors: A. Krzyzak and L. Zhang

Copyright © 2013 Zahra Khandan and Hadi Sadoghi Yazdi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
