Abstract
High-throughput genotyping of large numbers of lines remains a key challenge in plant genetics, requiring geneticists and breeders to balance data quality against the number of genotyped lines when resources are limited and a variety of technologies are available. In this work, we propose a new imputation pipeline ("HBimpute") that generates high-quality genomic data from low read-depth whole-genome sequencing data. The key idea of the pipeline is to use haplotype blocks from the software HaploBlocker to identify locally similar lines and merge their reads locally. The effectiveness of the pipeline is showcased on a dataset of 321 doubled haploid lines of a European maize landrace sequenced at 0.5X read depth. Overall imputation error rates are cut in half compared to the state-of-the-art software BEAGLE, while the average read depth is increased to 83X, thus enabling the calling of structural variation. The usefulness of the resulting imputed data panel is further evaluated by comparing its performance in common breeding applications to that of genomic data from a 600k array. In particular, for genome-wide association studies the sequence data performs slightly better. Furthermore, genomic prediction based on the markers shared between the array and the sequence data yields a slightly higher predictive ability for the imputed sequence data, indicating that the data quality obtained from low read-depth sequencing is on par with, or even slightly higher than, that of high-density array data. When all sequence markers are included, the predictive ability is slightly reduced, indicating lower overall data quality for the non-array markers.
Author summary
High-throughput genotyping of large numbers of lines remains a key challenge in plant genetics and breeding. Cost, precision, and throughput must be balanced to achieve optimal efficiency given available technologies and finite resources. Although genotyping arrays are still considered the gold standard in high-throughput quantitative genetics, recent advances in sequencing provide new opportunities for genotyping. Both the quality and the cost of sequencing-based genomic data depend strongly on the read depth used. In this work, we propose a new imputation pipeline ("HBimpute") that uses haplotype blocks to detect individuals of the same genetic origin and subsequently uses all reads of those individuals in the variant calling. The virtual read depth is thereby artificially increased, leading to higher calling accuracy, higher coverage, and the ability to call copy number variation based on relatively cheap low read-depth sequencing data. Our approach thus makes sequencing a cost-competitive alternative to genotyping arrays, with the additional benefit of the potential use of structural variation.
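The read-merging idea described above can be illustrated with a minimal sketch. The function pool_and_call and its inputs (per-line reference/alternative read counts and a clustering of lines that share a haplotype block within a window) are hypothetical placeholders for illustration only; they are not the actual HBimpute or HaploBlocker implementation, which additionally handles block detection, window construction, and imputation.

```python
import numpy as np

def pool_and_call(ref_counts, alt_counts, clusters, min_depth=2):
    """
    Pool the reads of locally similar lines and call a consensus allele.

    ref_counts, alt_counts : (n_lines, n_snps) arrays of per-line read counts
                             for the reference and alternative allele.
    clusters               : list of index arrays; each array holds the lines
                             sharing the same haplotype block in this window.
    min_depth              : minimum pooled depth required to make a call.

    Returns an (n_lines, n_snps) genotype matrix coded 0 (ref), 2 (alt),
    or -1 (missing), assuming fully homozygous (doubled haploid) lines.
    """
    n_lines, n_snps = ref_counts.shape
    genotypes = np.full((n_lines, n_snps), -1, dtype=int)

    for lines in clusters:
        # Merge the reads of all lines in the cluster ("virtual" read depth).
        pooled_ref = ref_counts[lines].sum(axis=0)
        pooled_alt = alt_counts[lines].sum(axis=0)
        depth = pooled_ref + pooled_alt

        # Call the majority allele wherever the pooled depth is sufficient.
        call = np.where(pooled_alt > pooled_ref, 2, 0)
        call[depth < min_depth] = -1

        # All lines in the cluster share the haplotype, so they share the call.
        genotypes[lines] = call

    return genotypes


# Toy example: 4 lines, 3 SNPs, two clusters of locally similar lines.
ref = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 2]])
alt = np.array([[0, 0, 1], [0, 0, 1], [1, 0, 0], [2, 0, 0]])
clusters = [np.array([0, 1]), np.array([2, 3])]
print(pool_and_call(ref, alt, clusters))
```

In this toy example, single reads that would be too sparse to call per line become callable once pooled within a cluster, which mirrors how merging reads of locally similar lines raises the effective read depth.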
Publisher
Cold Spring Harbor Laboratory
Cited by
2 articles.