Misinformation on social media has become a major focus of research and concern in recent years. Perhaps the most prominent approach to combating misinformation is the use of professional fact-checkers. This approach, however, is not scalable: Professional fact-checkers cannot possibly keep up with the volume of misinformation produced every day. Furthermore, not everyone trusts fact-checkers, with some arguing that they have a liberal bias. Here, we explore a potential solution to both of these problems: leveraging the “wisdom of crowds” to identify misinformation at scale using politically balanced groups of laypeople. Using a set of 207 news articles flagged for fact-checking by an internal Facebook algorithm, we compare the accuracy ratings given by (i) three professional fact-checkers after researching each article and (ii) 1,128 Americans from Amazon Mechanical Turk after simply reading the headline and lede sentence. We find that the average rating of a small, politically balanced crowd of laypeople is as strongly correlated with the average fact-checker rating as the fact-checkers’ ratings are correlated with each other. Furthermore, the layperson ratings predict with high accuracy whether the majority of fact-checkers rated a headline as “true,” particularly for headlines on which all three fact-checkers agree. We also find that laypeople’s cognitive reflection, political knowledge, and preference for the Democratic Party are positively related to agreement with fact-checker ratings; and that informing laypeople of each headline’s publisher leads to a small increase in agreement with fact-checkers. Our results indicate that crowdsourcing is a promising approach for helping to identify misinformation at scale.
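
To make the headline comparison concrete, the sketch below illustrates the kind of analysis described above: correlating a crowd’s average rating with the fact-checkers’ average rating, and benchmarking it against the fact-checkers’ agreement with one another. It is a minimal illustration only, using randomly generated toy ratings and hypothetical variable names (fc1–fc3, crowd_mean); it assumes pandas and scipy and is not the study’s actual analysis code.

```python
# Illustrative sketch with simulated data -- not the paper's analysis pipeline.
from itertools import combinations

import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_articles = 207  # number of flagged articles, as in the study

# Hypothetical continuous accuracy ratings from three fact-checkers (one row per article).
fc = pd.DataFrame(rng.normal(size=(n_articles, 3)), columns=["fc1", "fc2", "fc3"])

# Toy stand-in for the average rating of a politically balanced layperson crowd:
# correlated with the fact-checkers' signal, plus noise.
crowd_mean = fc.mean(axis=1) + rng.normal(scale=0.5, size=n_articles)

# Correlation of the crowd's average rating with the fact-checkers' average rating.
r_crowd, _ = pearsonr(crowd_mean, fc.mean(axis=1))

# Benchmark: average pairwise correlation among the fact-checkers themselves.
pairwise = [pearsonr(fc[a], fc[b])[0] for a, b in combinations(fc.columns, 2)]
r_fc = float(np.mean(pairwise))

print(f"crowd average vs. fact-checker average: r = {r_crowd:.2f}")
print(f"mean fact-checker pairwise correlation: r = {r_fc:.2f}")
```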