Abstract
Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI ethics is the lack of a well-defined material scope. Put differently, the question of which systems and processes AI ethics principles ought to apply to remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical challenges. Nevertheless, pragmatic problem-solving demands that things be sorted so that their grouping will promote successful actions for some specific end. In this article, we review and compare previous attempts to classify AI systems for the purpose of implementing AI governance in practice. We find that the classification attempts proposed in previous literature use one of three mental models: the Switch, i.e., a binary approach according to which systems either are or are not considered AI systems depending on their characteristics; the Ladder, i.e., a risk-based approach that classifies systems according to the ethical risks they pose; and the Matrix, i.e., a multi-dimensional classification that takes various aspects of a system into account, such as context, input data, and decision model. Each of these models for classifying AI systems comes with its own set of strengths and weaknesses. By conceptualising different ways of classifying AI systems as simple mental models, we hope to provide organisations that design, deploy, or regulate AI systems with the vocabulary needed to demarcate the material scope of their AI governance frameworks.
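To make the three mental models concrete, the sketch below shows how each might demarcate the material scope of a governance framework in code. It is an illustrative assumption, not taken from the article: the `System` fields, the `RiskTier` tiers, and the threshold logic are all hypothetical stand-ins for whatever criteria an organisation actually adopts.

```python
from dataclasses import dataclass
from enum import Enum


# Hypothetical descriptor of a system under assessment; the fields are
# illustrative assumptions, not a taxonomy proposed by the article.
@dataclass
class System:
    learns_from_data: bool            # decision model derived from data?
    affects_fundamental_rights: bool  # e.g. hiring, credit, policing
    autonomy: int                     # 0 = human-in-the-loop ... 2 = fully autonomous
    context: str                      # deployment domain, e.g. "hiring"


# The Switch: a binary test -- a system either is or is not "AI" in scope.
def switch_in_scope(s: System) -> bool:
    return s.learns_from_data


# The Ladder: a risk-based tiering, loosely in the spirit of risk-based
# regulation such as the EU AI Act (tiers and thresholds here are invented).
class RiskTier(Enum):
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2


def ladder_tier(s: System) -> RiskTier:
    if s.affects_fundamental_rights:
        return RiskTier.HIGH
    if s.autonomy >= 1:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


# The Matrix: a multi-dimensional profile (context, input data, decision
# model) rather than a single in/out label or risk tier.
def matrix_profile(s: System) -> dict:
    return {
        "context": s.context,
        "decision_model": "learned" if s.learns_from_data else "rule-based",
        "autonomy": s.autonomy,
    }


if __name__ == "__main__":
    screening_tool = System(learns_from_data=True,
                            affects_fundamental_rights=True,
                            autonomy=1, context="hiring")
    print(switch_in_scope(screening_tool))  # True -> in scope under the Switch
    print(ladder_tier(screening_tool))      # RiskTier.HIGH under the Ladder
    print(matrix_profile(screening_tool))   # multi-dimensional Matrix view
```

Note how the same system yields three different governance answers: a yes/no scope decision, a risk tier, and a profile, which is precisely why the choice of mental model matters.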
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Philosophy
Cited by
5 articles.