1. Angeli, G., Johnson, M., Manning, C.D.: Leveraging linguistic structure for open domain information extraction. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) (2015)
2. Brown, T.B., et al.: Language models are few-shot learners. arXiv abs/2005.14165 (2020)
3. Brożek, A.: The Structure of Natural Language Questions, pp. 129–169. Brill, Leiden, The Netherlands (2011)
4. Dai, Z., Li, L., Xu, W.: CFO: conditional focused neural question answering with large-scale knowledge bases. arXiv abs/1606.01994 (2016)
5. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Burstein, J., Doran, C., Solorio, T. (eds.) Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2–7, 2019, Volume 1 (Long and Short Papers), pp. 4171–4186 (2019)