Scientific Programming
Volume 2015, Article ID 316012, 14 pages
http://dx.doi.org/10.1155/2015/316012
Research Article

A Performance Study of a Dual Xeon-Phi Cluster for the Forward Modelling of Gravitational Fields

1ABACUS-CINVESTAV-IPN, Apartado Postal 14-740, 07000 Mexico City, DF, Mexico
2Centro de Desarrollo Aeroespacial del Instituto Politécnico Nacional, Belisario Domínguez 22, 06010 Mexico City, DF, Mexico
3Escuela Superior de Física y Matemáticas, Av. Instituto Politécnico Nacional Edificio 9, Unidad Profesional Adolfo López Mateos, 07738 Mexico City, DF, Mexico
4Instituto Mexicano del Petróleo, Eje Central Lázaro Cárdenas No. 152, 07730 Mexico City, DF, Mexico
5Department of Industrial Engineering, Campus Celaya-Salvatierra, University of Guanajuato, Mutualismo 303 Colonia Suiza, 38060 Celaya, Gto, Mexico

Received 31 December 2014; Revised 27 May 2015; Accepted 8 June 2015

Academic Editor: Enrique S. Quintana-Ortí

Copyright © 2015 Maricela Arroyo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. V. W. Lee, C. Kim, J. Chhugani et al., “Debunking the 100X GPU vs. CPU myth: an evaluation of throughput computing on CPU and GPU,” in Proceedings of the 37th Annual International Symposium on Computer Architecture (ISCA '10), vol. 38, pp. 451–460, ACM, Saint-Malo, France, June 2010.
  2. S. Wienke, D. Plotnikov, D. an Mey et al., “Simulation of bevel gear cutting with GPGPUs—performance and productivity,” Computer Science—Research and Development, vol. 26, no. 3-4, pp. 165–174, 2011.
  3. K. W. Schulz, R. Ulerich, N. Malaya, P. T. Bauman, R. Stogner, and C. Simmons, “Early experiences porting scientific applications to the many integrated core (MIC) platform,” in Proceedings of the TACC-Intel Highly Parallel Computing Symposium, Austin, Tex, USA, 2012.
  4. G. Chrysos, “Intel Xeon Phi coprocessors—the architecture,” Intel Whitepaper, 2014.
  5. R. Bell, “Gradiometría de gravedad,” Investigación y Ciencia: Edición Española de Scientific American, no. 263, pp. 50–55, 1998.
  6. R. Tenzer and P. Novák, “Effect of crustal density structures on GOCE gravity gradient observables,” Terrestrial, Atmospheric and Oceanic Sciences, vol. 24, no. 5, pp. 793–807, 2013.
  7. B. Heck and K. Seitz, “A comparison of the tesseroid, prism and point-mass approaches for mass reductions in gravity field modelling,” Journal of Geodesy, vol. 81, no. 2, pp. 121–136, 2007.
  8. D. Nagy, G. Papp, and J. Benedek, “The gravitational potential and its derivatives for the prism,” Journal of Geodesy, vol. 74, no. 7-8, pp. 552–560, 2000.
  9. C. Couder-Castañeda, J. C. Ortiz-Alemán, M. G. Orozco-del-Castillo, and M. Nava-Flores, “Forward modeling of gravitational fields on hybrid multi-threaded cluster,” Geofísica Internacional, vol. 54, no. 1, pp. 31–48, 2015.
  10. C. Couder-Castañeda, C. Ortiz-Alemán, M. G. Orozco-del-Castillo, and M. Nava-Flores, “TESLA GPUs versus MPI with OpenMP for the forward modeling of gravity and gravity gradient of large prisms ensemble,” Journal of Applied Mathematics, vol. 2013, Article ID 437357, 15 pages, 2013.
  11. Y. Zhang, M. Burcea, V. Cheng, R. Ho, and M. Voss, “An adaptive OpenMP loop scheduler for hyperthreaded SMPs,” in Proceedings of the International Conference on Parallel and Distributed Computing Systems (PDCS '04), 2004.
  12. O. Boulanger and M. Chouteau, “Constraints in 3D gravity inversion,” Geophysical Prospecting, vol. 49, no. 2, pp. 265–280, 2001.
  13. M. Čuma, G. A. Wilson, and M. S. Zhdanov, “Large-scale 3D inversion of potential field data,” Geophysical Prospecting, vol. 60, no. 6, pp. 1186–1199, 2012.
  14. M. S. Zhdanov, R. Ellis, and S. Mukherjee, “Three-dimensional regularized focusing inversion of gravity gradient tensor component data,” Geophysics, vol. 69, no. 4, pp. 925–937, 2004.
  15. L. B. Pedersen and T. M. Rasmussen, “The gradient tensor of potential field anomalies: some implications on data collection and data processing of maps,” Geophysics, vol. 55, no. 12, pp. 1558–1566, 1990.
  16. J. Reinders, An Overview of Programming for Intel Xeon Processors and Intel Xeon Phi Coprocessors, Intel Corporation, Santa Clara, Calif, USA, 2012.
  17. G. E. Allen and B. L. Evans, “Real-time sonar beamforming on workstations using process networks and POSIX threads,” IEEE Transactions on Signal Processing, vol. 48, no. 3, pp. 921–926, 2000.
  18. M. Curtis-Maury, X. Ding, C. D. Antonopoulos, and D. S. Nikolopoulos, “An evaluation of OpenMP on current and emerging multithreaded/multicore processors,” in OpenMP Shared Memory Parallel Programming: International Workshops, IWOMP 2005 and IWOMP 2006, Eugene, OR, USA, June 1–4, 2005, Reims, France, June 12–15, 2006, vol. 4315 of Lecture Notes in Computer Science, pp. 133–144, Springer, Berlin, Germany, 2008.
  19. R. D. Blumofe, C. F. Joerg, B. C. Kuszmaul, C. E. Leiserson, K. H. Randall, and Y. Zhou, “Cilk: an efficient multithreaded runtime system,” Journal of Parallel and Distributed Computing, vol. 37, no. 1, pp. 55–69, 1996.
  20. J. E. Stone, D. Gohara, and G. Shi, “OpenCL: a parallel programming standard for heterogeneous computing systems,” Computing in Science and Engineering, vol. 12, no. 3, Article ID 5457293, pp. 66–72, 2010.
  21. L. Dagum and R. Menon, “OpenMP: an industry standard API for shared-memory programming,” IEEE Computational Science and Engineering, vol. 5, no. 1, pp. 46–55, 1998.
  22. M. Curtis-Maury, X. Ding, C. D. Antonopoulos, and D. S. Nikolopoulos, “An evaluation of OpenMP on current and emerging multithreaded/multicore processors,” in OpenMP Shared Memory Parallel Programming: International Workshops, IWOMP 2005 and IWOMP 2006, Eugene, OR, USA, June 1–4, 2005, Reims, France, June 12–15, 2006, vol. 4315 of Lecture Notes in Computer Science, pp. 133–144, Springer, Berlin, Germany, 2008.
  23. T. Cramer, D. Schmidl, M. Klemm, and D. an Mey, “OpenMP programming on Intel Xeon Phi coprocessors: an early performance comparison,” in Proceedings of the Many-Core Applications Research Community Symposium (MARC '12), pp. 38–44, RWTH Aachen University, 2012.
  24. D. Schmidl, C. Terboven, D. an Mey, and M. Bücker, “Binding nested OpenMP programs on hierarchical memory architectures,” in Beyond Loop Level Parallelism in OpenMP: Accelerators, Tasking and More, M. Sato, T. Hanawa, M. Müller, B. Chapman, and B. de Supinski, Eds., vol. 6132 of Lecture Notes in Computer Science, pp. 29–42, Springer, Berlin, Germany, 2010.
  25. U. Ranok, S. Kittitornkun, and S. Tongsima, “A multithreading methodology with OpenMP on multi-core CPUs: SNPHAP case study,” in Proceedings of the 8th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI '11), pp. 459–463, May 2011.
  26. C. Leggett, S. Binet, K. Jackson, D. Levinthal, M. Tatarkhanov, and Y. Yao, “Parallelizing ATLAS reconstruction and simulation: issues and optimization solutions for scaling on multi- and many-CPU platforms,” Journal of Physics: Conference Series, vol. 331, no. 4, Article ID 42015, 2011.