Mathematical Problems in Engineering
Volume 2014 (2014), Article ID 913897, 14 pages
http://dx.doi.org/10.1155/2014/913897
Research Article

Adaptive Linear and Normalized Combination of Radial Basis Function Networks for Function Approximation and Regression

1School of Information Science and Technology, Xiamen University, 422 Si Ming South Road, Xiamen, Fujian 361005, China
2School of Science and Technology, The Open University of Hong Kong, 30 Good Shepherd Street, Ho Man Tin, Kowloon, Hong Kong

Received 2 November 2013; Revised 28 February 2014; Accepted 28 February 2014; Published 30 March 2014

Academic Editor: Wei Bian

Copyright © 2014 Yunfeng Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Linked References

  1. T. J. Rivlin, An Introduction to the Approximation of Functions, Dover, Mineola, NY, USA, 1981.
  2. N. I. Achieser, Theory of Approximation, Dover, Mineola, NY, USA, 2004.
  3. J. Ramsay and B. Silverman, Functional Data Analysis, Springer, New York, NY, USA, 1997.
  4. R. M. Rangayyan and Y. F. Wu, “Screening of knee-joint vibroarthrographic signals using statistical parameters and radial basis functions,” Medical and Biological Engineering and Computing, vol. 46, no. 3, pp. 223–232, 2008.
  5. R. M. Rangayyan and Y. Wu, “Analysis of vibroarthrographic signals with features related to signal variability and radial-basis functions,” Annals of Biomedical Engineering, vol. 37, no. 1, pp. 156–163, 2009.
  6. R. J. Schilling, J. J. Carroll Jr., and A. F. Al-Ajlouni, “Approximation of nonlinear systems with radial basis function neural networks,” IEEE Transactions on Neural Networks, vol. 12, no. 1, pp. 1–15, 2001.
  7. M. D. Buhmann, Radial Basis Functions, Cambridge University Press, Cambridge, UK, 2003.
  8. K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.
  9. J. Park and I. W. Sandberg, “Universal approximation using radial-basis-function networks,” Neural Computation, vol. 3, no. 2, pp. 246–257, 1991.
  10. S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, Englewood Cliffs, NJ, USA, 2nd edition, 1998.
  11. L. K. Hansen and P. Salamon, “Neural network ensembles,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 10, pp. 993–1001, 1990.
  12. S. Hashem and B. Schmeiser, “Improving model accuracy using optimal linear combinations of trained neural networks,” IEEE Transactions on Neural Networks, vol. 6, no. 3, pp. 792–794, 1995.
  13. K. Woods, W. Philip Kegelmeyer, and K. Bowyer, “Combination of multiple classifiers using local accuracy estimates,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 4, pp. 405–410, 1997.
  14. J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas, “On combining classifiers,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226–239, 1998.
  15. L. I. Kuncheva, “A theoretical study on six classifier fusion strategies,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 281–286, 2002.
  16. A. Rahman and B. Verma, “Novel layered clustering-based approach for generating ensemble of classifiers,” IEEE Transactions on Neural Networks, vol. 22, no. 5, pp. 781–792, 2011.
  17. Y. Wu and J. I. Arribas, “Fusing output information in neural networks: ensemble performs better,” in Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '03), pp. 2265–2268, September 2003.
  18. G. Valentini and F. Masulli, “Ensembles of learning machines,” in Neural Nets, vol. 2486 of Lecture Notes in Computer Science, pp. 3–20, 2002.
  19. A. Sinha, H. Chen, D. G. Danu, T. Kirubarajan, and M. Farooq, “Estimation and decision fusion: a survey,” Neurocomputing, vol. 71, no. 13–15, pp. 2650–2656, 2008.
  20. N. Ueda, “Optimal linear combination of neural networks for improving classification performance,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 2, pp. 207–215, 2000.
  21. Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119–139, 1997.
  22. L. Breiman, “Bagging predictors,” Machine Learning, vol. 24, no. 2, pp. 123–140, 1996.
  23. G. Ridgeway, “The state of boosting,” Computing Science and Statistics, vol. 31, pp. 172–181, 1999.
  24. B. Efron and R. Tibshirani, An Introduction to the Bootstrap, Chapman and Hall, New York, NY, USA, 1993.
  25. G. Fumera and F. Roli, “A theoretical and experimental analysis of linear combiners for multiple classifier systems,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 6, pp. 942–956, 2005.
  26. Y. Wu, J. He, Y. Man, and J. I. Arribas, “Neural network fusion strategies for identifying breast masses,” in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN '04), pp. 2437–2442, July 2004.
  27. Y. Wu, C. Wang, S. C. Ng, A. Madabhushi, and Y. Zhong, “Breast cancer diagnosis using neural-based linear fusion strategies,” in Neural Information Processing, vol. 4234 of Lecture Notes in Computer Science, pp. 165–175, 2006.
  28. Y. Wu and S. C. Ng, “Breast tissue classification based on unbiased linear fusion of neural networks with normalized weighted average algorithm,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '07), pp. 2846–2850, Orlando, Fla, USA, August 2007.
  29. Y. Wu and C. Wang, “Linear least-squares fusion of multilayer perceptrons for protein localization sites prediction,” in Proceedings of the IEEE 32nd Annual Northeast Bioengineering Conference, pp. 157–158, Easton, Pa, USA, April 2006.
  30. Y. Wu, Y. Ma, X. Liu, and C. Wang, “A bootstrap-based linear classifier-fusion system for protein subcellular location prediction,” in Proceedings of the 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '06), pp. 4229–4232, New York, NY, USA, 2006.
  31. Y. Wu and S. C. Ng, “Unbiased linear neural-based fusion with normalized weighted average algorithm for regression,” in Advances in Neural Networks, vol. 4493 of Lecture Notes in Computer Science, pp. 664–670, 2007.
  32. Y. Wu, Y. Zhou, S.-C. Ng, and Y. Zhong, “Combining neural-based regression predictors using an unbiased and normalized linear ensemble model,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '08), pp. 3955–3960, Hong Kong, June 2008.
  33. D. Opitz and R. Maclin, “Popular ensemble methods: an empirical study,” Journal of Artificial Intelligence Research, vol. 11, pp. 169–198, 1999.
  34. K. Tumer and J. Ghosh, “Analysis of decision boundaries in linearly combined neural classifiers,” Pattern Recognition, vol. 29, no. 2, pp. 341–348, 1996.
  35. V. Tresp and M. Taniguchi, “Combining estimators using non-constant weighting functions,” in Advances in Neural Information Processing Systems, G. Tesauro, D. Touretzky, and T. Leen, Eds., vol. 7, pp. 419–426, MIT Press, Cambridge, Mass, USA, 1995.
  36. Z.-H. Zhou, J. Wu, and W. Tang, “Ensembling neural networks: many could be better than all,” Artificial Intelligence, vol. 137, no. 1-2, pp. 239–263, 2002.
  37. Y. Wu and S.-C. Ng, “Adaptively fusing neural network predictors toward higher accuracy: a case study,” in Proceedings of the IEEE International Conference on Computational Intelligence for Measurement Systems and Applications (CIMSA '09), pp. 273–276, Hong Kong, May 2009.
  38. Y. Wu and S. Krishnan, “Combining least-squares support vector machines for classification of biomedical signals: a case study with knee-joint vibroarthrographic signals,” Journal of Experimental and Theoretical Artificial Intelligence, vol. 23, no. 1, pp. 63–77, 2011.
  39. Y. Wu, S. Cai, M. Lu, and S. Krishnan, “An artificial-neural-network-based multiple classifier system for knee-joint vibration signal classification,” in Advances in Computer, Communication, Control and Automation, Y. W. Wu, Ed., vol. 121 of Lecture Notes in Electrical Engineering, pp. 235–242, Springer, Heidelberg, Germany, 2011.
  40. S. Cai, S. Yang, F. Zheng, M. Lu, Y. Wu, and S. Krishnan, “Knee joint vibration signal analysis with matching pursuit decomposition and dynamic weighted classifier fusion,” Computational and Mathematical Methods in Medicine, vol. 2013, Article ID 904267, 11 pages, 2013.
  41. S. G. Nash and A. Sofer, Linear and Nonlinear Programming, McGraw-Hill, Columbus, Ohio, USA, 1995.
  42. A. Asuncion and D. J. Newman, “UCI machine learning repository,” 2007, http://archive.ics.uci.edu/ml/.
  43. S. Chen, C. F. N. Cowan, and P. M. Grant, “Orthogonal least squares learning algorithm for radial basis function networks,” IEEE Transactions on Neural Networks, vol. 2, no. 2, pp. 302–309, 1991.
  44. Z. Wang and A. C. Bovik, “Mean squared error: love it or leave it? A new look at signal fidelity measures,” IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 98–117, 2009.
  45. C. M. Bishop, Pattern Recognition and Machine Learning, Springer, New York, NY, USA, 2006.