Open Access Publications from the University of California

Extreme scale de novo metagenome assembly.

  • Author(s): Georganas, Evangelos;
  • Egan, Rob;
  • Hofmeyr, Steven A;
  • Goltsman, Eugene;
  • Arndt, Bill;
  • Tritt, Andrew;
  • Buluç, Aydin;
  • Oliker, Leonid;
  • Yelick, Katherine A
  • et al.

No data is associated with this publication.

Metagenome assembly is the process of transforming a set of short, overlapping, and potentially erroneous DNA segments from environmental samples into an accurate representation of the underlying microbiomes' genomes. State-of-the-art tools require large shared-memory machines and cannot handle contemporary metagenome datasets that exceed terabytes in size. In this paper, we introduce the MetaHipMer pipeline, a high-quality and high-performance metagenome assembler that employs an iterative de Bruijn graph approach. MetaHipMer leverages a specialized scaffolding algorithm that produces long scaffolds and accommodates the idiosyncrasies of metagenomes. MetaHipMer is parallelized end-to-end using the Unified Parallel C language and therefore runs seamlessly on both shared- and distributed-memory systems. Experimental results show that MetaHipMer matches or outperforms state-of-the-art tools in terms of accuracy. Moreover, MetaHipMer scales efficiently to large concurrencies and is able to assemble previously intractable grand-challenge metagenomes. We demonstrate the unprecedented capability of MetaHipMer by computing the first full assembly of the Twitchell Wetlands dataset, consisting of 7.5 billion reads totaling 2.6 TB.
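To illustrate the de Bruijn graph approach the abstract refers to, here is a minimal sketch of graph construction from reads. This is purely illustrative and is not MetaHipMer's UPC implementation: the function name, k-mer size, and toy reads are assumptions, and a real iterative assembler would repeat this over increasing values of k, with error filtering and distributed hash tables.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers, and each k-mer
    in a read contributes an edge from its prefix to its suffix."""
    graph = defaultdict(set)
    for read in reads:
        # Slide a window of length k over the read to enumerate k-mers.
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

# Toy example: two overlapping reads, k = 4.
reads = ["ACGTAC", "GTACGT"]
graph = de_bruijn_graph(reads, 4)
```

Walking the resulting edges (each (k-1)-mer overlapping the next by k-2 bases) reconstructs contiguous sequence; contigs from successive k values then feed the scaffolding stage.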
