1. Brown, T., et al.: Language models are few-shot learners. In: Advances in Neural Information Processing Systems, pp. 1877–1901. Curran Associates, Inc. (2020)
2. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171–4186. Association for Computational Linguistics (2019)
3. Hristova, B.: Some students are using ChatGPT to cheat—here’s how schools are trying to stop it (2023). https://www.cbc.ca/news/canada/hamilton/chatgpt-school-cheating-1.6734580
4. Bender, E.M., Gebru, T., McMillan-Major, A., Shmitchell, S.: On the dangers of stochastic parrots: can language models be too big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623. Association for Computing Machinery, New York (2021)
5. Ericson, C.: Hazard Analysis Techniques for System Safety. Wiley, Hoboken (2005)