Computational Intelligence and Neuroscience
Volume 2015, Article ID 297672, 13 pages
http://dx.doi.org/10.1155/2015/297672
Research Article

MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

1 School of Electrical Engineering and Information, Sichuan University, Chengdu 610065, China
2 The Key Laboratory of Embedded Systems and Service Computing, Tongji University, Shanghai 200092, China
3 Department of Computing, Canterbury Christ Church University, Canterbury, Kent CT1 1QU, UK

Received 28 May 2015; Accepted 11 August 2015

Academic Editor: Ladislav Hluchy

Copyright © 2015 Yang Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
