1. Crossley, S.A.: Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence) (2019)
2. Botarleanu, R.M., Dascalu, M., Allen, L.K., Crossley, S.A., McNamara, D.S.: Multitask summary scoring with longformers. In: Rodrigo, M.M., Matsuda, N., Cristea, A.I., Dimitrova, V. (eds.) AIED 2022. LNCS, vol. 13355, pp. 756–761. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-11644-5_79
3. Liu, Y., et al.: RoBERTa: a robustly optimized BERT pretraining approach. arXiv:1907.11692 [cs.CL]. http://arxiv.org/abs/1907.11692 (2019)
4. Beltagy, I., Peters, M.E., Cohan, A.: Longformer: the long-document transformer. arXiv:2004.05150. https://doi.org/10.48550/arXiv.2004.05150 (2020)
5. Crossley, S.A., Heintz, A., Choi, J., Batchelor, J., Karimi, M., Malatinszky, A.: The CommonLit ease of readability (CLEAR) corpus. In: Educational Data Mining (2021)