Managing Variant Calling Files the Big Data Way: Using HDFS and Apache Parquet

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review


Abstract

Big Data has been seen as a remedy for the efficient management of the ever-increasing volume of genomic data. In this paper, we investigate the use of Apache Spark to store and process Variant Calling Files (VCF) on a Hadoop cluster. We demonstrate Tomatula, a software tool for converting VCF files to the Apache Parquet storage format, and an application for querying variant calling datasets. We evaluate how the wall time (i.e. the time until the query answer is returned to the user) scales out on a Hadoop cluster storing VCF files either in the original flat-file format or in the Apache Parquet columnar storage format. Apache Parquet compresses the VCF data by around a factor of 10 and supports easier querying of VCF files because it exposes the field structure. We discuss advantages and disadvantages in terms of storage capacity and querying performance for both flat VCF files and Apache Parquet, using an open plant breeding dataset. We conclude that Apache Parquet offers benefits for reducing storage size and wall time, and scales out with larger datasets.
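The abstract describes converting flat VCF files into the Apache Parquet columnar format with Apache Spark and then querying them on HDFS. Below is a minimal PySpark sketch of that general workflow; the HDFS paths, the handling of the eight mandatory VCF columns, and the example query are illustrative assumptions, and this is not the Tomatula implementation itself.

```python
# Hypothetical sketch of the VCF-to-Parquet workflow described in the abstract.
# Paths and column handling are assumptions; this is not the Tomatula tool.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, split

spark = SparkSession.builder.appName("vcf-to-parquet-sketch").getOrCreate()

# Read the flat VCF file from HDFS and drop header/meta lines (those starting with '#').
vcf_path = "hdfs:///data/variants.vcf"  # assumed input path
lines = spark.read.text(vcf_path).filter(~col("value").startswith("#"))

# Split each tab-separated data line into the eight mandatory VCF columns
# (per-sample genotype columns are omitted in this sketch).
fields = split(col("value"), "\t")
variants = lines.select(
    fields.getItem(0).alias("CHROM"),
    fields.getItem(1).cast("int").alias("POS"),
    fields.getItem(2).alias("ID"),
    fields.getItem(3).alias("REF"),
    fields.getItem(4).alias("ALT"),
    fields.getItem(5).alias("QUAL"),
    fields.getItem(6).alias("FILTER"),
    fields.getItem(7).alias("INFO"),
)

# Write the dataset in the columnar Parquet format on HDFS (compressed by default).
variants.write.mode("overwrite").parquet("hdfs:///data/variants.parquet")

# Query the Parquet dataset, e.g. count variants in a region of chromosome 1.
parquet_df = spark.read.parquet("hdfs:///data/variants.parquet")
n = parquet_df.filter((col("CHROM") == "1") & col("POS").between(100000, 200000)).count()
print(n)
```

Because Parquet stores the VCF fields as named, typed columns, a query like the one above only scans the CHROM and POS columns rather than parsing every flat-text record, which is the querying advantage the abstract refers to.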
Original language: English
Title of host publication: BDCAT '17 Proceedings of the Fourth IEEE/ACM International Conference on Big Data Computing, Applications and Technologies
Publisher: ACM
Pages: 219-226
ISBN (Electronic): 9781450355490
DOIs
Publication status: Published - 2017
Event: Fourth IEEE/ACM International Conference on Big Data Computing, Applications and Technologies - Austin, United States
Duration: 5 Dec 2017 – 8 Dec 2017
Conference number: 4

Conference

Conference: Fourth IEEE/ACM International Conference on Big Data Computing, Applications and Technologies
Country: United States
City: Austin
Period: 5/12/17 – 8/12/17

Keywords

  • Big Data
  • bioinformatics
  • variant calling
  • Hadoop
  • HDFS
  • Apache Spark
  • Apache Parquet
