Abstract
The potential benefits and risks of artificial intelligence technologies have sparked a wide-ranging debate in both academic and public circles. On one hand, there is an urgent call to address the immediate and avoidable challenges associated with these tools, such as accountability, privacy, bias, understandability, and transparency; on the other hand, prominent figures like Geoffrey Hinton and Elon Musk have voiced concerns over the potential rise of Super Artificial Intelligence, whose singularity could pose an existential threat to humanity. Coordinating the efforts of thousands of decentralized entities to prevent such a hypothetical event may seem insurmountable in our intricate and multipolar world. Drawing from both perspectives, this work therefore proposes employing the tools and framework of Stoic philosophy, particularly the dichotomy of control: focusing on what is within our power. This Stoic principle offers a practical and epistemological approach to managing the complexities of AI, encouraging individuals to organize their efforts around what they can influence while adapting to the constraints of external factors. Within this framework, the essay finds that Stoic wisdom is essential for assessing risks, courage is necessary to face contemporary challenges, and temperance and tranquility are indispensable. These lessons can inform ongoing public and academic discourse and aid the development of more effective policy proposals for aligning Narrow AI and General AI with human values.
Funder
Universidad Autonoma Metropolitana
Publisher
Springer Science and Business Media LLC