Abstract
It is conventionally argued that because an artificially-intelligent (AI) system acts autonomously, its makers cannot easily be held liable should the system's actions cause harm. Since the system cannot be held liable on its own account either, existing laws expose victims to accountability gaps and need to be reformed. Recent legal instruments have nonetheless established obligations on AI developers and providers. Drawing on attribution theory, this paper examines how these seemingly opposing positions are shaped by the ways in which AI systems are conceptualised. Specifically, folk dispositionism underpins conventional legal discourse on AI liability, personality, publications, and inventions, and leads us towards problematic legal outcomes. Examining the technology and terminology driving contemporary AI systems, the paper contends that AI systems are better conceptualised as situational characters whose actions remain constrained by their programming. Properly viewing AI systems as such illuminates how existing legal doctrines could be sensibly applied to AI and reinforces emerging calls for placing greater scrutiny on the broader AI ecosystem.
Publisher
Cambridge University Press (CUP)
Cited by
1 article.
1. Artificial Intelligence Law for Malaysia. 2023 International Conference for Technological Engineering and its Applications in Sustainable Development (ICTEASD), 2023-11-14.