Authors:
Lukas Rimkus, Jonas Verbickas, Riza Batista-Navarro
Publisher:
Springer Nature Switzerland