1. Brown, P. (1999). “Repetition.” Journal of Linguistic Anthropology, 9 (1/2), pp. 223–226.
2. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
3. Johnstone, B. (2002). Discourse Analysis. John Wiley & Sons.
4. 川本稔己, 長谷川駿, 上垣外英剛, 船越孝太郎, 奥村学 (2021). 傾聴の応答で繰り返される語句の検出性能の向上. 言語処理学会第27回年次大会発表論文集, pp. 1580–1584. [Kawamoto, T., Hasegawa, S., Kamigaito, H., Funakoshi, K., and Okumura, M. (2021). “Improving Detection Performance for Phrases Repeated in Attentive-Listening Responses.” In Proceedings of the 27th Annual Meeting of the Association for Natural Language Processing, pp. 1580–1584. (In Japanese.)]
5. Kawamoto, T., Kamigaito, H., Funakoshi, K., and Okumura, M. (2022). “Generating Repetitions with Appropriate Repeated Words.” In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 852–859, Seattle, United States. Association for Computational Linguistics.