Abstract
Scientific evidence has become increasingly important for decision-making processes in contemporary democracies. On the one hand, research on the utilization of scientific knowledge in the political process has pointed out that decision-makers learn from evidence to improve policies in order to solve problems. On the other hand, scholars have underlined that actors learn from evidence to support their political interests, regardless of how this affects the policy problem. One conventional insight from the policy learning literature is that higher salience of a policy issue makes it much less likely that decision-makers use evidence in an “unpolitical” way. Nevertheless, only a few studies have systematically investigated how differences in issue salience between policy fields affect how decision-makers learn from evaluations at the individual level. Using multilevel regression models on data from a legislative survey in Switzerland, this paper shows that the salience and technical complexity of policy issues do not automatically lead to less policy learning and more political learning from policy evaluations. However, the empirical analysis also points out that issue salience increases policy learning from evaluations if the policy issue is technically complex. Our findings contribute to research on policy learning and evidence-based policy making by linking the literature on policy evaluation with the literature on learning, which helps to analyze the micro-foundations of learning in public policy and administration.
Publisher
Springer Science and Business Media LLC
Subject
Management, Monitoring, Policy and Law; Public Administration; General Social Sciences; Sociology and Political Science; Development
References: 102 articles.
1. Alkin, M. C., & King, J. A. (2017). Definitions of evaluation use and misuse, evaluation influence, and factors affecting use. American Journal of Evaluation, 38(3), 434–450.
2. Alkin, M. C., & King, J. A. (2016). The historical development of evaluation use. American Journal of Evaluation, 37(4), 568–579.
3. Alkin, M. C., & Taut, S. M. (2003). Unbundling evaluation use. Studies in Educational Evaluation, 29(1), 1–12.
4. Amara, N., Ouimet, M., & Landry, R. (2004). New evidence on instrumental, conceptual, and symbolic utilization of university research in government agencies. Science Communication, 26(1), 75–106.
5. Ansell, C. (2011). Pragmatist governance: Re-imagining institutions and democracy. Oxford University Press.
Cited by
10 articles.