International Journal of Antennas and Propagation
Volume 2012 (2012), Article ID 280359, 6 pages
Research Article

The Case for Higher Computational Density in the Memory-Bound FDTD Method within Multicore Environments

Mohammed F. Hadi

Electrical Engineering Department, Kuwait University, P.O. Box 5969, Safat 13060, Kuwait

Received 6 October 2011; Revised 27 January 2012; Accepted 31 January 2012

Academic Editor: Stefano Selleri

Copyright © 2012 Mohammed F. Hadi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


It is argued here that more accurate, though more compute-intensive, alternatives to certain computational methods, long deemed too inefficient and wasteful in serial codes, can be more efficient and cost-effective when implemented in parallel codes designed for today's multicore and many-core environments. This argument is most germane to methods that involve large data sets with relatively limited computational density, that is, algorithms with small ratios of floating-point operations to memory accesses. The examples chosen here to support this argument represent a variety of high-order finite-difference time-domain (FDTD) algorithms. It will be demonstrated that a three- to eightfold increase in floating-point operations due to higher-order finite differences translates into only a two- to threefold increase in actual run times on today's graphics or central processing units. It is hoped that this argument will convince researchers to revisit long-shelved numerical techniques and reevaluate them for multicore usability.
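The trade-off the abstract describes can be sketched with a minimal NumPy example, assuming a 1D staggered grid and the standard fourth-order staggered-difference coefficients 9/8 and -1/24 (as used in FDTD(2,4)-type schemes); this is an illustration of the flops-versus-accuracy principle, not the specific benchmarked codes of the paper. The fourth-order stencil performs roughly three times the floating-point work of the second-order one per grid point while touching nearly the same memory, yet delivers far lower error:

```python
import numpy as np

def d1_second_order(f, dx):
    """Second-order staggered central difference: ~2 flops per point."""
    return (f[1:] - f[:-1]) / dx

def d1_fourth_order(f, dx):
    """Fourth-order staggered difference (coefficients 9/8, -1/24):
    roughly 3x the flops of the second-order stencil, but the data
    traffic per point is nearly unchanged."""
    c1, c2 = 9.0 / 8.0, -1.0 / 24.0
    return (c1 * (f[2:-1] - f[1:-2]) + c2 * (f[3:] - f[:-3])) / dx

# Compare accuracy on a smooth field sampled on a staggered grid.
n = 64
dx = 2 * np.pi / n
x = np.arange(n + 3) * dx          # a few extra points for the wider stencil
f = np.sin(x)
x_mid = x[:-1] + dx / 2            # derivatives are evaluated between samples

err2 = np.max(np.abs(d1_second_order(f, dx) - np.cos(x_mid)))
err4 = np.max(np.abs(d1_fourth_order(f, dx) - np.cos(x_mid[1:-1])))
print(err2, err4)  # the fourth-order error is orders of magnitude smaller
```

Because such stencils are memory-bound, the extra arithmetic of the wider stencil largely hides behind the same memory traffic on bandwidth-limited hardware, which is the crux of the argument made in the paper.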