Scientific Programming
Volume 16, Issue 2-3, Pages 255-270

Large-Scale Phylogenetic Analysis on Current HPC Architectures

Michael Ott,1 Jaroslaw Zola,2 Srinivas Aluru,2 Andrew D. Johnson,3,4 Daniel Janies,3 and Alexandros Stamatakis5

1Department of Computer Science, Technical University of Munich, Munich, Germany
2Department of Electrical and Computer Engineering, Iowa State University, IA, USA
3Department of Biomedical Informatics, The Ohio State University Medical Center, OH, USA
4Framingham Heart Study, National Heart, Lung, and Blood Institute, MD, USA
5The Exelixis Lab, Department of Computer Science, Ludwig-Maximilians-University Munich, Munich, Germany

Copyright © 2008 Hindawi Publishing Corporation. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Phylogenetic inference is considered a grand challenge in bioinformatics due to its immense computational requirements. The increasing popularity and availability of large multi-gene alignments and of comprehensive single nucleotide polymorphism (SNP) datasets in current biological studies, coupled with the rapid accumulation of sequence data in general, pose new challenges for high performance computing. Using the example of RAxML, which is currently among the fastest and most accurate programs for phylogenetic inference under the Maximum Likelihood (ML) criterion, we demonstrate how the phylogenetic ML function can be efficiently scaled to current supercomputer architectures such as the IBM BlueGene/L (BG/L) and SGI Altix. This is achieved by simultaneously exploiting the coarse- and fine-grained parallelism inherent in every ML-based biological analysis. Performance is assessed on two datasets: one comprising 270 sequences and 566,470 base pairs (the haplotype map dataset), and one comprising 2,182 sequences and 51,089 base pairs. To the best of our knowledge, these are the largest datasets analyzed under ML to date. Experimental results indicate that the fine-grained parallelization scales well up to 1,024 processors. Moreover, an even larger number of processors can be exploited efficiently by combining coarse- and fine-grained parallelism. We also demonstrate that our parallelization scales equally well on an AMD Opteron cluster with a less favorable network-latency-to-processor-speed ratio. Finally, we underline the practical relevance of our approach with a biological discussion of the results of the haplotype map dataset analysis, which revealed novel biological insights via phylogenetic inference.
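The fine-grained parallelism mentioned above exploits the fact that the ML function is a sum of independent per-site (per-column) log-likelihood contributions over the alignment, so columns can be distributed across workers and the partial sums reduced at a master. The following is a minimal illustrative Python sketch of that idea only; it is not RAxML's actual C/MPI implementation, and `site_log_likelihood` is a toy placeholder (a real implementation would evaluate Felsenstein's pruning algorithm over the tree for each column).

```python
import math

def site_log_likelihood(column):
    # Toy per-site score standing in for the real per-column likelihood,
    # which in ML inference comes from Felsenstein's pruning algorithm.
    counts = {b: column.count(b) for b in "ACGT"}
    n = len(column)
    lik = 1.0
    for b in column:
        lik *= max(counts.get(b, 0), 1) / (4.0 * n)
    return math.log(lik)

def serial_log_likelihood(columns):
    # Sequential reference: the ML score is the sum over all columns.
    return sum(site_log_likelihood(c) for c in columns)

def fine_grained_log_likelihood(columns, num_workers):
    # Fine-grained parallelism: partition the alignment columns among
    # workers, let each compute a partial sum, then reduce the partial
    # sums. With MPI this would be a scatter of columns followed by an
    # MPI_Reduce of the per-worker partial log likelihoods.
    chunks = [columns[i::num_workers] for i in range(num_workers)]
    partials = [sum(site_log_likelihood(c) for c in chunk) for chunk in chunks]
    return sum(partials)
```

Because the per-column terms are independent, the partitioned computation yields the same log likelihood (up to floating-point summation order) regardless of the number of workers, which is what allows the approach to scale to large processor counts.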