Affiliation:
1. University of Minnesota - Twin Cities, USA
Abstract
Political scientists interested in studying the political implications of citizens’ cognitive abilities often turn to readily available “intelligence ratings” in the American National Election Studies (ANES). These ratings are generally thought to represent respondents’ cognitive abilities, albeit imperfectly. I hypothesize that these ratings do not reflect more-objective tests of cognitive ability and instead better capture considerations related to the political subject of the interviews and other contextual factors. Using the 2012 ANES, which included a cognitive ability test (“Wordsum”), I find that political engagement and demographic factors, but not actual measurements of cognitive ability, are associated with interviewers’ intelligence ratings. In bivariate analyses, verbal intelligence and interviewer ratings are moderately correlated, consistent with conventional wisdom. But this correlation is spurious: the same holds for political engagement, education, and household income. Further, in multivariate models, Wordsum scores are neither statistically nor substantively predictive of interviewer ratings. The results suggest that interviewer ratings are better understood as proxies for political engagement, not cognitive ability. I conclude by arguing that the growing importance of studying cognitive ability in political science makes it necessary to include more-objective verbal intelligence tests more frequently in public opinion surveys.
Subject
Political Science and International Relations, Public Administration, Sociology and Political Science