Affiliation: Tokyo Institute of Technology
Publisher: Association for Natural Language Processing