The optimal way to make decisions in many circumstances is to track the difference in evidence collected in favour of the options. The drift diffusion model (DDM) implements this approach and provides an excellent account of decisions and response times. However, the DDM struggles to account for confidence reports, because responses are triggered when the difference in evidence reaches a set value, which suggests that confidence should be equal across all decisions. Many theories of confidence have therefore turned to alternative, non-optimal models of the decision process. Motivated by the historical success of the DDM, we consider simple extensions to this framework that might allow it to account for confidence. On the principle that the brain is unlikely to duplicate representations of evidence, decisions and confidence are based on the same evidence-accumulation process in all model variants. We compare the models to benchmark results and, in a new preregistered study, successfully apply four qualitative tests concerning the relationships between confidence, evidence, and time. Using computationally cheap expressions to model confidence on a trial-by-trial basis, we find that a subset of model variants also provides an excellent account of the precise quantitative effects observed in confidence data. Specifically, our results favour the hypothesis that confidence reflects the strength of accumulated evidence penalised by the time taken to reach the decision (a Bayesian readout), although the penalty applied is not perfectly calibrated to the specific task context. These results suggest there is no need to abandon the DDM to account successfully for confidence reports.
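To make the core mechanism concrete, the sketch below simulates a single-accumulator DDM trial in which a response is triggered when the accumulated evidence difference reaches a bound, and confidence is read out as the accumulated evidence minus a time penalty. This is a minimal illustration only, not the fitted models reported in the paper; the drift, bound, noise, and penalty values are arbitrary assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_trial(drift=0.8, bound=1.0, noise=1.0, dt=0.001, penalty=0.3, max_t=5.0):
    """Simulate one drift diffusion trial with a time-penalised confidence readout.

    All parameter values are arbitrary illustrative choices, not fitted estimates.
    """
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        # Accumulate the difference in evidence with Gaussian noise.
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = 1 if x >= bound else 0  # which bound was reached
    # Confidence readout: accumulated evidence for the chosen option,
    # penalised by the time taken to reach the decision.
    confidence = abs(x) - penalty * t
    return choice, t, confidence

# With a positive drift, choice == 1 corresponds to the correct response.
choices, rts, confs = zip(*(ddm_trial() for _ in range(1000)))
print(f"accuracy={np.mean(choices):.2f}, mean RT={np.mean(rts):.2f}s, "
      f"mean confidence={np.mean(confs):.2f}")
```

Because decisions terminate at the same bound, raw terminal evidence alone would yield near-constant confidence; the time penalty is what lets confidence vary across trials in this sketch, mirroring the Bayesian-readout idea described above.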