1. Brown, T., et al.: Language models are few-shot learners. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901. Curran Associates, Inc. (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf
2. Doshi-Velez, F., Kim, B.: Towards a rigorous science of interpretable machine learning (2017). https://arxiv.org/abs/1702.08608
3. Goebel, R., Kano, Y., Kim, M.Y., Rabelo, J., Satoh, K., Yoshioka, M.: Summary of the competition on legal information extraction/entailment (COLIEE) 2023. In: Proceedings of the Nineteenth International Conference on Artificial Intelligence and Law, pp. 472–480 (2023)
4. Hoshino, R., Kiyota, N., Kano, Y.: Question answering system for legal bar examination using predicate argument structures focusing on exceptions. In: Proceedings of the Sixth International Competition on Legal Information Extraction/Entailment (COLIEE 2019), pp. 38–42 (2019)
5. Hu, E.J., et al.: LoRA: low-rank adaptation of large language models. In: International Conference on Learning Representations (2022). https://openreview.net/forum?id=nZeVKeeFYf9