Publisher: Springer Nature Switzerland