Abstract
Discussions of the ethics of military use of artificial intelligence have insufficiently addressed the differences among the methods (algorithms) that such software provides. These methods are surveyed, and key differences are identified that affect their ethical military use, most notably in lethal autonomous systems. Possible mitigations of ethical problems are discussed, including sharing decision-making with humans, testing the software more thoroughly, explaining what the software is doing, checking for biases, and building explicit ethics into the software. In many cases, the best mitigation is explaining the software's reasoning and calculations to aid transparency.
Subject
Artificial Intelligence, Information Systems, Computer Science (miscellaneous)
Cited by 4 articles.
1. Thou Shall Not Kill; Advances in Human Services and Public Health; 2024-03-06
2. AI in Defence and Ethical Concerns; 2024 Second International Conference on Emerging Trends in Information Technology and Engineering (ICETITE); 2024-02-22
3. Explainable AI in Military Training Applications; Advances in Computational Intelligence and Robotics; 2024-01-18
4. The Need for Explainable AI in Industry 5.0; Advances in Computational Intelligence and Robotics; 2024-01-18