Abstract
A long tradition in philosophy and economics equates intelligence with the ability to act rationally, that is, to choose actions that can be expected to achieve one's objectives. This framework is so pervasive within AI that it would be reasonable to call it the standard model. A great deal of progress on reasoning, planning, and decision-making, as well as perception and learning, has occurred within the standard model. Unfortunately, the standard model is unworkable as a foundation for further progress because it is seldom possible to specify objectives completely and correctly in the real world. The chapter proposes a new model for AI development in which the machine's uncertainty about the true objective leads to qualitatively new modes of behavior that are more robust, controllable, and deferential to humans.
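The abstract's final claim, that a machine uncertain about the true objective behaves more deferentially, can be made concrete with a small numerical sketch. The Python snippet below is a hypothetical toy model, not taken from the chapter: the Gaussian belief, the specific numbers, and the assumption of a human who judges the action correctly are all illustrative assumptions. A machine compares acting unilaterally, switching itself off, and deferring to a human veto; the deferential policy has the highest expected value precisely because the machine is unsure of the value of its own action.

```python
# Toy sketch (illustrative assumptions only): a machine is unsure whether its
# proposed action has positive or negative value U for the human, and compares
# three policies: act unilaterally, switch itself off, or defer to a human veto.
import numpy as np

rng = np.random.default_rng(0)

# The machine's belief about the action's value U to the human: a Gaussian
# prior (the mean and spread are made-up numbers for illustration).
mu, sigma = 0.5, 2.0
u = rng.normal(mu, sigma, size=100_000)

# Policy 1: act regardless of the human -> expected value is simply E[U].
value_act = u.mean()

# Policy 2: switch off and do nothing -> value 0 by definition.
value_off = 0.0

# Policy 3: defer -- propose the action and let the human (assumed here to
# judge its value correctly) allow it only when U > 0 -> E[max(U, 0)].
value_defer = np.maximum(u, 0.0).mean()

print(f"act unilaterally: {value_act:.3f}")
print(f"switch off:       {value_off:.3f}")
print(f"defer to human:   {value_defer:.3f}")  # highest when uncertainty is large
```

With these numbers, deferring dominates both acting and switching off; as the machine's uncertainty (sigma) shrinks toward zero, the gap closes and unilateral action becomes nearly as good, which matches the abstract's point that objective uncertainty is what produces the deferential behavior.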
Funder
Verein zur Förderung des digitalen Humanismus
Publisher
Springer International Publishing