1. Olah, C., Cammarata, N., Schubert, L., Goh, G., Petrov, M., and Carter, S. (2020). Zoom In: An Introduction to Circuits. Distill. Available online: https://distill.pub/2020/circuits/zoom-in (accessed on 1 November 2023).
2. Olsson, C., Elhage, N., Nanda, N., Joseph, N., DasSarma, N., Henighan, T., Mann, B., Askell, A., Bai, Y., and Chen, A. (2022). In-Context Learning and Induction Heads. Transformer Circuits Thread. Available online: https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html (accessed on 1 November 2023).
3. Michaud, E.J., Liu, Z., Girit, U., and Tegmark, M. (2023). The Quantization Model of Neural Scaling. arXiv.
4. Elhage, N., Nanda, N., Olsson, C., Henighan, T., Joseph, N., Mann, B., Askell, A., Bai, Y., Chen, A., and Conerly, T. (2021). A Mathematical Framework for Transformer Circuits. Transformer Circuits Thread. Available online: https://transformer-circuits.pub/2021/framework/index.html (accessed on 1 November 2023).
5. Wang, K.R., Variengien, A., Conmy, A., Shlegeris, B., and Steinhardt, J. (2023, May 1–5). Interpretability in the Wild: A Circuit for Indirect Object Identification in GPT-2 Small. Proceedings of the Eleventh International Conference on Learning Representations, Kigali, Rwanda.