1. Center for AI Safety. Statement on AI Risk: AI experts and public figures express their concern about AI risk. https://www.safe.ai/statement-on-ai-risk. Accessed 30 May 2023
2. Future of Life Institute. Pause Giant AI Experiments: An Open Letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/. Accessed 31 May 2023
3. Cave, S., ÓhÉigeartaigh, S.S.: Bridging near- and long-term concerns about AI. Nat Mach Intell 1(1), 5–6 (2019). https://doi.org/10.1038/s42256-018-0003-2
4. Wong, M.: AI doomerism is a decoy. The Atlantic (2023)
5. Goldman, S.: AI experts challenge ‘doomer’ narrative, including ‘extinction risk’ claims. VentureBeat (2023)