1. Barr, K. (2022). AI Image Generators Routinely Display Gender and Cultural Bias. Gizmodo. https://gizmodo.com/ai-dall-e-stability-ai-stable-diffusion-1849728302 (accessed 14 March 2023).
2. Bender, E.M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), New York, NY, USA, pp. 610–623. Association for Computing Machinery. Available at: https://doi.org/10.1145/3442188.3445922.
3. Bommasani, R., Hudson, D.A., Adeli, E., et al. (2022). On the Opportunities and Risks of Foundation Models (arXiv:2108.07258). arXiv. Available at: https://doi.org/10.48550/arXiv.2108.07258 (accessed 7 March 2023).
4. Boyd, A., et al. (2023). The Value of AI Guidance in Human Examination of Synthetically-Generated Faces. In: Proceedings of the AAAI Conference on Artificial Intelligence.
5. Brown, T.B., et al. (2020). Language Models Are Few-Shot Learners. In: Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901.