Abstract
New methods for metagenomic binning are typically evaluated using benchmarking software, and become tuned to maximize whatever criterion is measured by the benchmark. Subtleties in benchmarking procedures can cause misleading evaluations, derailing method development. Differences between procedures used to evaluate binning tools make them hard to compare, which slows progress in the field. We introduce BinBencher, a free software suite for benchmarking, and show how BinBencher produces evaluations that are more biologically meaningful than alternative benchmarking approaches.