Abstract
Purpose of Review
There is much debate in machine ethics about the most appropriate way to introduce ethical reasoning capabilities into robots and other intelligent autonomous machines (IAMs). The main problem is that hardwiring intelligent and cognitive robots with commands not to cause harm or damage is inconsistent with the notions of autonomy and intelligence. Such hardwiring also leaves robots with no course of action when they encounter situations for which they were not programmed, or in which some harm results whatever action is taken.
Recent Findings
Recent developments in intelligent autonomous vehicle standards have led to the identification of different levels of autonomy that can be usefully applied to different levels of cognitive robotics. In particular, the introduction of an ethical reasoning capability can add levels of autonomy not previously envisaged but which may be necessary if fully autonomous robots are to be trustworthy. However, research into how to give IAMs an ethical reasoning capability remains a relatively under-explored area in artificial intelligence and robotics. This review covers previous research approaches involving case-based reasoning, artificial neural networks, constraint satisfaction, category theory, abductive logic, inductive logic, and fuzzy logic.
Summary
This paper reviews what is currently known about machine ethics and the ways in which cognitive robots, and IAMs in general, can be provided with an ethical reasoning capability. A new type of metric-based ethics appropriate for robots and IAMs may be required to replace our current, largely qualitative, concept of ethical reasoning.
Publisher
Springer Science and Business Media LLC
Cited by
1 article.