Affiliations:
1. Future of Humanity Institute, UK
2. JB Speed School of Engineering, USA
Abstract
Superintelligent systems are likely to present serious safety issues, since such entities would have great power to steer the future according to their possibly misaligned goals or motivation systems. Oracle AIs (OAIs), confined AIs that can only answer questions and do not otherwise act in the world, represent one particular solution to this problem. Even Oracles, however, are not fully safe: humans remain vulnerable to traps, social engineering, or simply becoming dependent on the OAI. Nevertheless, OAIs are strictly safer than general AIs, and many additional layers of precaution can be added on top of them. This paper begins with a definition of the OAI Confinement Problem. After an analysis of existing solutions and their shortcomings, a protocol is proposed for creating a more secure confinement environment, one that might delay the negative effects of a potentially unfriendly superintelligence while allowing for future research on and development of superintelligent systems.
Cited by: 1 article.
1. A General Paradigm of Knowledge-driven and Data-driven Fusion;2023 15th International Conference on Advanced Computational Intelligence (ICACI);2023-05-06