1. Adel, H., Schütze, H.: Type-aware convolutional neural networks for slot filling. J. Artif. Intell. Res. 66, 297–339 (2019)
2. Alivanistos, D., Santamaría, S., Cochez, M., Kalo, J., van Krieken, E., Thanapalasingam, T.: Prompting as probing: using language models for knowledge base construction. In: Singhania, S., Nguyen, T.P., Razniewski, S. (eds.) Knowledge Base Construction from Pre-trained Language Models (LM-KBC 2022), pp. 11–34. CEUR Workshop Proceedings, CEUR-WS.org (2022)
3. Balazevic, I., Allen, C., Hospedales, T.: TuckER: tensor factorization for knowledge graph completion. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 5185–5194. Association for Computational Linguistics (2019). https://doi.org/10.18653/v1/D19-1522. https://aclanthology.org/D19-1522
4. Balog, K.: Entity-Oriented Search. The Information Retrieval Series. Springer, Cham (2018)
5. Bentivogli, L., Clark, P., Dagan, I., Giampiccolo, D.: The seventh PASCAL recognizing textual entailment challenge. In: Proceedings of the Text Analysis Conference (TAC 2011) (2011)