Adjusting for publication bias is essential when drawing meta-analytic inferences. However, most methods that adjust for publication bias are sensitive to the particular research conditions, such as the degree of heterogeneity in effect sizes across studies. Sladekova et al. (2022) tried to circumvent this complication by selecting the methods that are most appropriate for a given set of conditions, and concluded that publication bias on average causes only minimal over-estimation of effect sizes in psychology. However, this approach suffers from a "catch-22" problem: to know the underlying research conditions, one needs to have adjusted for publication bias correctly, but to correctly adjust for publication bias, one needs to know the underlying research conditions. To alleviate this problem, we conduct an alternative analysis, robust Bayesian meta-analysis (RoBMA), which is based not on model selection but on model averaging. In RoBMA, models that predict the observed results better are given correspondingly higher weights. A RoBMA reanalysis of Sladekova et al.'s data reveals that more than 60% of meta-analyses in psychology notably overestimate the evidence for the presence of the meta-analytic effect and more than 50% overestimate its magnitude. Our results highlight the need for robust bias correction when conducting meta-analyses and for the adoption of publishing formats, such as Registered Reports, that are less prone to publication bias.
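
As a sketch of the model-averaging principle mentioned above (notation ours, not taken from the abstract; the specific model set and priors used by RoBMA are not spelled out here), each candidate model is weighted by its posterior model probability, and inferences are averaged across models in proportion to these weights:

\[
P(\mathcal{M}_k \mid y) \;=\; \frac{p(y \mid \mathcal{M}_k)\, P(\mathcal{M}_k)}{\sum_{j} p(y \mid \mathcal{M}_j)\, P(\mathcal{M}_j)},
\qquad
\bar{\theta} \;=\; \sum_{k} P(\mathcal{M}_k \mid y)\, \mathrm{E}\!\left[\theta \mid y, \mathcal{M}_k\right],
\]

where $p(y \mid \mathcal{M}_k)$ is the marginal likelihood of model $\mathcal{M}_k$ given the observed data $y$. Models that predict the data better thus receive correspondingly higher weight in the averaged effect estimate $\bar{\theta}$, rather than a single "best" model being selected outright.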