Design Principles for User Interfaces in AI-Based Decision Support Systems: The Case of Explainable Hate Speech Detection

Published: 2022-03-02
ISSN: 1387-3326
Container-title: Information Systems Frontiers
Short-container-title: Inf Syst Front
Language: en
Authors: Meske, Christian; Bunde, Enrico
Abstract
Hate speech in social media is an increasing problem that can negatively affect individuals and society as a whole. Moderators on social media platforms need to be technologically supported to detect problematic content and react accordingly. In this article, we develop and discuss the design principles that are best suited for creating efficient user interfaces for decision support systems that use artificial intelligence (AI) to assist human moderators. We qualitatively and quantitatively evaluated various design options over three design cycles with a total of 641 participants. Besides measuring perceived ease of use, perceived usefulness, and intention to use, we also conducted an experiment to prove the significant influence of AI explainability on end users’ perceived cognitive efforts, perceived informativeness, mental model, and trustworthiness in AI. Finally, we tested the acquired design knowledge with software developers, who rated the reusability of the proposed design principles as high.
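The abstract describes a decision support system in which an AI classifier flags potential hate speech and explains its verdict to a human moderator. As a minimal illustration of that idea (not the authors' actual system or dataset), the sketch below trains a tiny Naive Bayes classifier on hypothetical examples and surfaces per-word log-odds contributions as the "explanation" a moderator might see; all names and training sentences are invented for this example.

```python
from collections import Counter
import math

# Toy training data (hypothetical examples, not from the paper's dataset).
# Label 1 = hate speech, label 0 = benign.
TRAIN = [
    ("you are all worthless trash", 1),
    ("those people are vermin and should leave", 1),
    ("i hate this stupid group of idiots", 1),
    ("what a lovely day at the park", 0),
    ("thanks for sharing this great article", 0),
    ("looking forward to the weekend everyone", 0),
]

def fit(examples):
    """Count word frequencies per class for a Naive Bayes model."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def explain(counts, text, alpha=1.0):
    """Return per-word log-odds contributions toward the 'hate' class.

    Positive scores push the prediction toward hate speech, negative
    scores toward benign; Laplace smoothing with constant `alpha`.
    """
    vocab = set(counts[0]) | set(counts[1])
    n0 = sum(counts[0].values()) + alpha * len(vocab)
    n1 = sum(counts[1].values()) + alpha * len(vocab)
    scores = {}
    for word in text.split():
        p1 = (counts[1][word] + alpha) / n1
        p0 = (counts[0][word] + alpha) / n0
        scores[word] = math.log(p1 / p0)
    return scores

counts = fit(TRAIN)
scores = explain(counts, "those idiots are trash")
prediction = "hate" if sum(scores.values()) > 0 else "benign"
for word, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{word:>8s}  {s:+.2f}")
print("prediction:", prediction)
```

The per-word scores play the role the paper assigns to AI explainability: rather than a bare verdict, the moderator sees which tokens drove the decision, which is what the study links to lower perceived cognitive effort and higher trustworthiness. Production systems would use far richer models and attribution methods (e.g., gradient- or perturbation-based XAI), but the display pattern is the same.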
Funder: Ruhr-Universität Bochum
Publisher: Springer Science and Business Media LLC
Subjects: Computer Networks and Communications; Information Systems; Theoretical Computer Science; Software
References: 87 articles.
Cited by: 27 articles.