1. Callan, J., Hoy, M., Yoo, C., Zhao, L.: ClueWeb09 data set (2009), https://lemurproject.org/clueweb09/ Accessed 28 Apr 2023
2. Craswell, N., Mitra, B., Yilmaz, E., Campos, D., Voorhees, E.M.: Overview of the TREC 2019 Deep Learning Track (2020), https://arxiv.org/abs/2003.07820 Accessed 28 Apr 2023
3. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1, pp. 4171–4186 (2019)
4. Dhingra, B., Mazaitis, K., Cohen, W.W.: Quasar: Datasets for question answering by search and reading (2017), https://arxiv.org/abs/1707.03904
5. Dong, Q., et al.: Incorporating explicit knowledge in pre-trained language models for passage re-ranking. In: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1490–1501. SIGIR '22, ACM, New York, NY, USA (2022). https://doi.org/10.1145/3477495.3531997