Abstract
While early optimists saw online discussions as potential spaces for deliberation, the reality of many online spaces is characterized by incivility and irrationality. Increasingly, AI tools are proposed as a solution for fostering deliberative discourse. Against the backdrop of previous research, we show that AI tools for online discussions focus heavily on the deliberative norms of rationality and civility. In operationalizing these norms for AI tools, their complex deliberative dimensions are simplified, and the focus lies on detecting argumentative structures through argument mining or on verbal markers of supposedly uncivil comments. When the fairness of such tools is considered, attention centers on data bias and an input–output framing of the problem. We argue that looking beyond bias and analyzing such applications through a sociotechnical frame reveals how they interact with social hierarchies and inequalities, reproducing patterns of exclusion. The current focus on verbal markers of incivility and on argument mining risks excluding minority voices and privileging those with greater access to education. Finally, we present a normative argument for why examining AI tools for online discourse through a sociotechnical frame is ethically preferable: ignoring the predictable negative effects we describe would constitute a form of objectionable indifference.
Funder
Jürgen Manchot Stiftung
Heinrich-Heine-Universität Düsseldorf
Publisher
Springer Science and Business Media LLC