Abstract
Audit experiments are used to measure discrimination in a large number of domains (employment: Bertrand et al. (2004); legislator responsiveness: Butler et al. (2011); housing: Fang et al. (2018)). All audit studies estimate the average difference in response rates depending on randomly varied characteristics (such as the race or gender of the requester). Scholars conducting audit experiments often seek to extend their analyses beyond the effect on response to effects on the quality of the response. Because response is itself a consequence of treatment, answering these important questions well is complicated by post-treatment bias (Montgomery et al., 2018). In this note, I consider a common form of post-treatment bias that occurs in audit experiments.
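The bias the abstract describes arises when quality is compared only among subjects who responded, since response is affected by treatment. A minimal simulation sketch (all variable names and parameter values are illustrative assumptions, not from the note itself) shows how conditioning on responders can produce a spurious treatment effect on quality even when the true effect is zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Unobserved latent type of each audited subject (e.g., "friendliness")
u = rng.normal(size=n)
# Randomized treatment indicator (e.g., requester characteristic)
z = rng.integers(0, 2, size=n)

# Response: treatment lowers response; friendlier subjects respond more
respond = (u + 0.8 - 1.0 * z) > rng.normal(size=n)

# Quality depends only on the latent type: true treatment effect is zero
quality = u + rng.normal(scale=0.5, size=n)

# Naive comparison among responders conditions on a post-treatment variable:
# treated responders are selected to have higher u, biasing the estimate upward
naive = quality[respond & (z == 1)].mean() - quality[respond & (z == 0)].mean()
print(f"naive effect on quality among responders: {naive:.3f}")
```

Under these assumptions, the naive responder-only contrast is substantially positive despite a true effect of zero, because treated subjects must have a higher latent type to respond at all.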
Publisher
Cambridge University Press (CUP)
Subject
Sociology and Political Science
References
14 articles.
1. Fang (2018). Can the Government Deter Discrimination? Evidence from a Randomized Intervention in New York City. Journal of Politics.
2. Aronow (2018). A Note on Dropping Experimental Subjects who Fail a Manipulation Check. Political Analysis.
3. How Responsive are Political Elites? A Meta-Analysis of Experiments on Public Officials.
4. Power to the People: Evidence from a Randomized Field Experiment on Community-Based Monitoring in Uganda.
Cited by
65 articles.