Abstract
Risks connected with AI systems have become a recurrent topic in public and academic debates, and the European proposal for the AI Act explicitly adopts a risk-based tiered approach that associates different levels of regulation with different levels of risk. However, a comprehensive and general framework for thinking about AI-related risk is still lacking. In this work, we aim to provide an epistemological analysis of such risk, building upon the existing literature on disaster risk analysis and reduction. We show how a multi-component analysis of risk, one that distinguishes between the dimensions of hazard, exposure, and vulnerability, allows us to better understand the sources of AI-related risks and to intervene effectively to mitigate them. This multi-component analysis also proves particularly useful in the case of general-purpose and experimental AI systems, for which it is often hard to perform both ex-ante and ex-post risk analyses.