Abstract
The potential of artificial intelligence (AI) has grown exponentially in recent years, generating value but also creating risks. AI systems are characterised by their complexity, opacity and autonomy in operation. Now and in the foreseeable future, AI systems will not operate in a fully autonomous manner. This means that providing appropriate incentives to the human parties involved remains of great importance in reducing AI-related harm. Liability rules should therefore be adapted in such a way as to give the relevant parties incentives to efficiently reduce the social costs of potential accidents. Relying on a law and economics approach, we address the theoretical question of what kind of liability rules should be applied to the different parties along the AI value chain. In addition, we critically analyse the ongoing policy debates in the European Union, discussing the risk that European policymakers will fail to determine efficient liability rules with regard to different stakeholders.
Publisher
Cambridge University Press (CUP)
Cited by
2 articles.
1. Product liability for defective AI;European Journal of Law and Economics;2024-02-27
2. Self-regulated and Participatory Automatic Text Simplification;Communications in Computer and Information Science;2024