1. Adebayo J, Gilmer J, Muelly M, Goodfellow I, Hardt M, Kim B (2018) Sanity checks for saliency maps. In: Advances in neural information processing systems, pp 9505–9515
2. Amann J, Blasimme A, Vayena E, Frey D, Madai V (2020) Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak 20. https://doi.org/10.1186/s12911-020-01332-6
3. Braşoveanu AMP, Andonie R (2020) Visualizing transformers for NLP: a brief survey. In: 2020 24th international conference information visualisation (IV), pp 270–279. https://doi.org/10.1109/IV51561.2020.00051
4. Braşoveanu AMP, Andonie R (2022) Visualizing and explaining language models. In: Kovalerchuk B, Nazemi K, Andonie R, Datia N, Banissi E (eds) Integrating artificial intelligence and visualization for visual knowledge discovery. Springer International Publishing, Cham, pp 213–237. https://doi.org/10.1007/978-3-030-93119-3_8
5. Brown TB, Mann B, Ryder N, Subbiah M, Kaplan J, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, Agarwal S, Herbert-Voss A, Krueger G, Henighan T, Child R, Ramesh A, Ziegler DM, Wu J, Winter C, Hesse C, Chen M, Sigler E, Litwin M, Gray S, Chess B, Clark J, Berner C, McCandlish S, Radford A, Sutskever I, Amodei D (2020) Language models are few-shot learners. In: Advances in neural information processing systems, vol 33, pp 1877–1901. https://github.com/openai/gpt-3