Abstract
For decades, computer benchmarkers have fought a War of Means. Although many have raised concerns about the geometric mean (GM), it continues to be used by SPEC and others. This war is an unnecessary misunderstanding, caused by inadequately articulated implicit assumptions and by confusion among populations, their parameters, sampling methods, and sample statistics. In fact, all of the Means have their uses, sometimes in combination. Metrics may be algebraically correct, but statistically irrelevant or misleading if applied to population distributions for which they are inappropriate. Normal (Gaussian) distributions are so useful that they are often assumed without question, but many important distributions are not normal. These require different analyses, most commonly by finding a mathematical transformation that yields a normal distribution, computing the metrics, and then back-transforming to the original scale. Consider the distribution of relative performance ratios of programs on two computers. The normal distribution is a good fit only when variance and skew are small; otherwise it generates logical impossibilities and misleading statistical measures. A much better choice is the lognormal (or log-normal) distribution, not just on theoretical grounds, but through the (necessary) validation with real data. Normal and lognormal distributions are similar for low variance and skew, but the lognormal handles skewed distributions reasonably, unlike the normal. Lognormal distributions occur frequently elsewhere, are well understood, and have standard methods of analysis. Everyone agrees that "Performance is not a single number" ... and then argues about which number is better. It is more important to understand populations, appropriate methods, and proper ways to convey uncertainty. When population parameters are estimated via samples, statistically correct methods must be used to produce the appropriate means, measures of dispersion, skew, confidence levels, and perhaps goodness-of-fit estimators. If the wrong Mean is chosen, it is difficult to achieve much. The GM predicts the mean relative performance of programs, not of workloads. The usual GM formula is rather unintuitive and is often claimed to have no physical meaning. However, it is the back-transformed average of a lognormal distribution, as can be seen from the mathematical identity below. Its use is not only statistically appropriate in some cases, but enables straightforward computation of other useful statistics.

$$\mathrm{GM}(x_1,\dots,x_n) \;=\; \left(\prod_{i=1}^{n} x_i\right)^{1/n} \;=\; e^{\frac{1}{n}\sum_{i=1}^{n}\ln x_i}$$

"If a man will begin in certainties, he shall end in doubts, but if he will be content to begin with doubts, he shall end with certainties." — Francis Bacon, in Savage.
Publisher: Association for Computing Machinery (ACM)
References (29 articles)
1. Savage, S. "Some Gratuitous Inflammatory Remarks on the Accounting Industry." http://www.stanford.edu/dept/MSandE/faculty/savage/AccountingRemarks.pdf
2. How not to lie with statistics: the correct way to summarize benchmark results
3. Characterizing computer performance with a single number
4. More on finding a single number to indicate overall performance of a benchmark suite
5. Jain, R. The Art of Computer Systems Performance Analysis. John Wiley and Sons, New York, 1991.
Cited by 23 articles.