Misinformation online poses a range of threats, from subverting democratic processes to undermining public health measures. Proposed solutions range from encouraging more selective sharing by individuals to platform removal of false content and the accounts that create or promote it. Deciding whether, how, and which strategies to implement depends on their relative and combined ability to reduce the viral spread of misinformation at practical levels of enforcement. Here we provide a framework to evaluate interventions aimed at reducing viral misinformation online, both in isolation and when used in combination. We begin by deriving a generative model of viral misinformation spread, inspired by research on infectious disease. Applying this model to a large corpus of misinformation events that occurred during the 2020 US election, we reveal that commonly proposed interventions, including removal of content, virality circuit breakers, nudges, and account banning, are unlikely to be effective in isolation without extreme censorship. However, our framework demonstrates that a combined approach can achieve a substantial (~50%) reduction in the prevalence of misinformation. Our results challenge claims that combating misinformation requires either new ideas or high costs to user expression. Instead, we highlight a practical path forward as misinformation online continues to threaten vaccination efforts, equity, and democratic processes around the globe.
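To make the compounding logic concrete, the sketch below simulates a toy subcritical branching process of reshare cascades, in the spirit of the infectious-disease analogy described above. It is a minimal illustration, not the paper's fitted model: the reproduction number `R0`, the dispersion `K`, and the per-intervention multipliers in `INTERVENTIONS` are all hypothetical placeholder values chosen only to show how individually modest reductions can combine multiplicatively into a roughly 50% drop in prevalence.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters (illustrative placeholders, not fitted values).
# Each post draws its number of reshares from a negative binomial with
# mean R and dispersion K, mirroring common epidemic models.
R0 = 0.90  # baseline reproduction number (subcritical, so cascades die out)
K = 0.50   # dispersion; smaller K means heavier-tailed cascades

# Hypothetical multipliers on R for each intervention at a practical
# enforcement level; none of these values comes from the paper.
INTERVENTIONS = {
    "content removal":  0.96,
    "circuit breakers": 0.97,
    "nudges":           0.97,
    "account bans":     0.98,
}

def cascade_size(r, cap=1_000_000):
    """Total posts in one cascade seeded by a single post."""
    size, active = 1, 1
    while active and size < cap:
        # numpy parameterizes the negative binomial by (n, p) with
        # mean n * (1 - p) / p, so p = K / (K + r) gives mean r.
        born = int(rng.negative_binomial(K, K / (K + r), size=active).sum())
        size += born
        active = born
    return size

def mean_prevalence(r, n_events=50_000):
    # For a subcritical branching process the analytic mean size is
    # 1 / (1 - r); Monte Carlo keeps the example fully generative.
    return np.mean([cascade_size(r) for _ in range(n_events)])

baseline = mean_prevalence(R0)
print(f"baseline mean cascade size: {baseline:.2f}")
for name, mult in INTERVENTIONS.items():
    reduction = 1 - mean_prevalence(R0 * mult) / baseline
    print(f"{name:>16} alone: {reduction:6.1%} reduction in prevalence")
combined_mult = np.prod(list(INTERVENTIONS.values()))
combined = 1 - mean_prevalence(R0 * combined_mult) / baseline
print(f"{'all combined':>16}:      {combined:6.1%} reduction in prevalence")
```

Under these assumed numbers, each intervention alone cuts prevalence by only ~15-25%, while applying all four together reduces the effective reproduction number enough to halve mean cascade size, the same qualitative pattern the framework reports for real interventions.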