1. Blair, D. C. and Maron, M. E. (1985). “An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System.” Communications of the ACM, 28 (3), pp. 289–299.
2. Cheng, J. and Lapata, M. (2016). “Neural Summarization by Extracting Sentences and Words.” In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 484–494, Berlin, Germany. Association for Computational Linguistics.
3. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2019), pp. 4171–4186, Minneapolis, USA. Association for Computational Linguistics.
4. Ishigaki, T., Kamigaito, H., Takamura, H., and Okumura, M. (2019). “Discourse-Aware Hierarchical Attention Network for Extractive Single-Document Summarization.” In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pp. 497–506, Varna, Bulgaria. INCOMA Ltd.
5. Ishigaki, T., Nishino, S., Washino, S., Igarashi, H., Nagai, Y., Washida, Y., and Murai, A. (2022). “Automating Horizon Scanning in Future Studies.” In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pp. 319–327, Marseille, France. European Language Resources Association.