Affiliations:
1. Professor of Research Design and Statistics (retired), Internet Society for Sport Science, Auckland, New Zealand
2. Professor of Nutrition, Metabolism, and Exercise, Massey University, Auckland, New Zealand
Abstract
Meta‐analysts often use standardized mean differences (SMD) to combine mean effects from studies in which the dependent variable has been measured with different instruments or scales. In this tutorial we show how the SMD is properly calculated as the difference in means divided by a between‐subject reference‐group, control‐group, or pooled pre‐intervention SD, usually free of measurement error. When combining mean effects from controlled trials and crossovers, most meta‐analysts have divided by either the pooled SD of change scores, the pooled SD of post‐intervention scores, or the pooled SD of pre‐ and post‐intervention scores, resulting in SMDs that are biased and difficult to interpret. The frequent use of such inappropriate standardizing SDs by meta‐analysts in three medical journals we surveyed is due to misleading advice in peer‐reviewed publications and meta‐analysis packages. Even with an appropriate standardizing SD, meta‐analysis of SMDs increases heterogeneity artifactually via differences in the standardizing SD between settings. Furthermore, the usual magnitude thresholds for standardized mean effects are not thresholds for clinically important differences. We therefore explain how to use other approaches to combining mean effects of disparate measures: log transformation of factor effects (response ratios) and of percent effects converted to factors; rescaling of psychometrics to percent of maximum range; and rescaling with minimum clinically important differences. In the absence of clinically important differences, we explain how standardization after meta‐analysis with appropriately transformed or rescaled pre‐intervention SDs can be used to assess magnitudes of a meta‐analyzed mean effect in different settings.
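The SMD described in the abstract can be sketched numerically. The following is a minimal illustration, not the authors' code: it pools the pre‐intervention SDs of two groups (weighted by degrees of freedom) and divides the difference in mean changes by that between‐subject SD, rather than by an SD of change or post‐intervention scores. All numbers are hypothetical.

```python
import math

def pooled_pre_sd(sd1: float, n1: int, sd2: float, n2: int) -> float:
    # Pooled pre-intervention SD of the two groups, weighted by
    # degrees of freedom (n - 1), as a between-subject standardizing SD.
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def smd(mean_diff: float, sd1: float, n1: float, sd2: float, n2: int) -> float:
    # Standardized mean difference: the difference in means divided by a
    # between-subject pre-intervention SD (not the SD of change scores,
    # which the abstract identifies as a source of bias).
    return mean_diff / pooled_pre_sd(sd1, n1, sd2, n2)

# Hypothetical trial: net mean effect of 5.0 units,
# pre-intervention SDs of 10.0 and 12.0, with 20 subjects per group.
print(round(smd(5.0, 10.0, 20, 12.0, 20), 3))  # → 0.453
```

Dividing instead by a pooled SD of change scores would shrink the denominator whenever pre- and post-intervention scores are highly correlated, inflating the SMD in the way the abstract warns against.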