Affiliation:
1. Energy, Mining and Environment Research Centre, National Research Council Canada, Montreal, QC, Canada H4P 2R2
Abstract
In shotgun metagenomics (SM), the state-of-the-art bioinformatic workflows are referred to as high-resolution shotgun metagenomics (HRSM) and require intensive computing and disk-storage resources. While the increased data output of the latest high-throughput DNA sequencing systems allows unprecedented sequencing depth at minimal cost, HRSM workflows will need adjustments to process these ever-growing sequence datasets. One potential adaptation is to generate so-called shallow SM datasets, which contain less sequencing data per sample than classic high-coverage sequencing. While shallow sequencing is a promising avenue for SM data analysis, detailed benchmarks using real data are lacking. In this case study, we took four public SM datasets, one massive and three moderate in size, and subsampled each at various levels to mimic shallow sequencing datasets of various depths. Our results suggest that shallow SM sequencing is a viable avenue for obtaining sound results on microbial community structure and that high-depth sequencing does not add elements for ecological interpretation. More specifically, subsampling as little as 0.5 M sequencing clusters per sample gave results similar to those obtained with the largest subsampled dataset for the human gut and agricultural soil datasets. For an Antarctic dataset, which contained only a few samples, 4 M sequencing clusters per sample generated results comparable to the full dataset. One area where ultra-deep sequencing and maximal use of the data were undeniably beneficial was the generation of metagenome-assembled genomes.
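The subsampling described in the abstract amounts to drawing a fixed number of read pairs (sequencing clusters) at random from each sample's FASTQ files. The sketch below is an illustration only, not the authors' actual pipeline; it implements single-pass reservoir sampling of paired-end reads in Python, and the file names and the 0.5 M target are hypothetical placeholders. In practice, established tools such as seqtk (e.g., seqtk sample with a fixed seed applied to both mates) are commonly used for the same task.

```python
import gzip
import random

def read_fastq(path):
    """Yield FASTQ records as 4-line tuples from a plain or gzipped file."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as fh:
        while True:
            record = [fh.readline() for _ in range(4)]
            if not record[0]:  # end of file
                return
            yield tuple(record)

def subsample_pairs(r1_path, r2_path, n_pairs, out1, out2, seed=42):
    """Reservoir-sample n_pairs read pairs, keeping R1/R2 in sync.

    Iterating over both files in lockstep guarantees that mates are
    sampled together, mimicking subsampling by sequencing cluster.
    """
    rng = random.Random(seed)
    reservoir = []
    for i, pair in enumerate(zip(read_fastq(r1_path), read_fastq(r2_path))):
        if i < n_pairs:
            reservoir.append(pair)
        else:
            # Algorithm R: replace an existing entry with decreasing probability.
            j = rng.randrange(i + 1)
            if j < n_pairs:
                reservoir[j] = pair
    with open(out1, "w") as o1, open(out2, "w") as o2:
        for rec1, rec2 in reservoir:
            o1.writelines(rec1)
            o2.writelines(rec2)

# Hypothetical usage: subsample one sample to 0.5 M read pairs.
subsample_pairs("sample_R1.fastq.gz", "sample_R2.fastq.gz",
                500_000, "sub_R1.fastq", "sub_R2.fastq")
```

Reservoir sampling keeps memory proportional to the target depth rather than the input size, which matters when downsampling the kind of massive dataset the study describes.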
Publisher
Oxford University Press (OUP)
Subject
Molecular Biology, Information Systems
Cited by
11 articles.