Abstract
This paper explores the potential of a multidisciplinary approach to testing and aligning artificial intelligence (AI), specifically focusing on large language models (LLMs). Due to the rapid development and wide application of LLMs, challenges such as ethical alignment, controllability, and predictability of these models have emerged as global risks. This study investigates an innovative simulation-based multi-agent system within a virtual reality framework that replicates the real-world environment. The framework is populated by automated 'digital citizens,' simulating complex social structures and interactions to examine and optimize AI. Application of theories from sociology, social psychology, computer science, physics, biology, and economics demonstrates the possibility of a more human-aligned and socially responsible AI. The purpose of such a digital environment is to provide a dynamic platform where advanced AI agents can interact and make independent decisions, thereby mimicking realistic scenarios. The actors in this digital city, operated by LLMs, serve as the primary agents and exhibit high degrees of autonomy. While this approach shows immense potential, there are notable challenges and limitations, most significantly the unpredictable nature of real-world social dynamics. This research endeavors to contribute to the development and refinement of AI, emphasizing the integration of social, ethical, and theoretical dimensions for future research.
Publisher
Springer Science and Business Media LLC
References (128 articles)
1. Aher G, Arriaga RI, Kalai AT (2023) Using large language models to simulate multiple humans and replicate human subject studies. arXiv. http://arxiv.org/abs/2208.10264
2. AkshitIreddy (2023) Interactive LLM Powered NPCs. GitHub. https://github.com/AkshitIreddy/Interactive-LLM-Powered-NPCs
3. Altman S (2023) Planning for AGI and beyond. OpenAI Blog. https://openai.com/blog/planning-for-agi-and-beyond
4. Amodei D, Olah C, Steinhardt J, Christiano P, Schulman J, Mané D (2016) Concrete problems in AI safety. arXiv. https://doi.org/10.48550/arXiv.1606.06565
5. Armstrong S, Sotala K, Óhéigeartaigh SS (2012) The errors, insights and lessons of famous AI predictions – and what they mean for the future. J Exper Theor Artif Intell 26(3):317–342