1. Pre-trained BERT model. https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad (2018). Accessed 20 Jan 2021
2. Abdelhaq, H., Sengstock, C., Gertz, M.: EvenTweet: online localized event detection from Twitter. Proc. VLDB Endow. 6(12), 1326–1329 (2013)
3. Christensen, J., Mausam, Soderland, S., Etzioni, O.: An analysis of open information extraction based on semantic role labeling. In: Proceedings of the Sixth International Conference on Knowledge Capture, pp. 113–120 (2011)
4. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://www.aclweb.org/anthology/N19-1423
5. Etzioni, O., Fader, A., Christensen, J., Soderland, S., Mausam: Open information extraction: the second generation. In: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), pp. 3–10 (2011)