Affiliation:
1. The Ohio State University
2. University of California, Merced
Abstract
To allow performance comparisons across different systems, our community has developed multiple widely used benchmarks, such as TPC-C and YCSB. However, despite this effort, interpreting and comparing performance numbers remains challenging, because one can tune benchmark parameters, system features, and hardware settings, and such tuning can lead to very different system behaviors. This raises a long-standing question: do the conclusions of a work hold under different settings?
This work tries to shed light on this question by reproducing 11 works evaluated under TPC-C and YCSB, measuring their performance under a wider range of settings, and investigating the reasons for changes in performance numbers. In doing so, this paper tries to motivate the discussion about whether and how we should address this problem. While it does not give a complete solution, which is beyond the scope of a single paper, it proposes concrete suggestions we can take to improve the state of the art.
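To make the tuning concern concrete, consider YCSB's core workload, which is configured through a plain properties file. The two sketches below are illustrative only and are not taken from the paper; the property names follow the standard YCSB core workload, while the record and operation counts are placeholder values. The sketches differ only in request distribution and read/update mix, yet one produces a skewed, update-heavy workload with contention on hot keys, and the other a uniform, read-only workload with almost none.

    # Sketch A: update-heavy, skewed access (hot keys, high contention)
    # (class name may differ by YCSB version)
    workload=site.ycsb.workloads.CoreWorkload
    # placeholder sizes
    recordcount=1000000
    operationcount=10000000
    readproportion=0.5
    updateproportion=0.5
    requestdistribution=zipfian

    # Sketch B: read-only, uniform access (little contention)
    workload=site.ycsb.workloads.CoreWorkload
    recordcount=1000000
    operationcount=10000000
    readproportion=1.0
    updateproportion=0.0
    requestdistribution=uniform

Each file would be passed to the YCSB client with -P (for example, ./bin/ycsb run <db-binding> -P sketch_a.properties). Throughput and latency reported under these two settings can differ substantially on the same system, which is exactly why a benchmark name alone does not pin down the workload being measured.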
Publisher
Association for Computing Machinery (ACM)
Cited by
5 articles.