1. Apidianaki, M., Garí Soler, A.: ALL dolphins are intelligent and SOME are friendly: probing BERT for nouns’ semantic properties and their prototypicality. CoRR abs/2110.06376 (2021)
2. Bosselut, A., Rashkin, H., Sap, M., Malaviya, C., Celikyilmaz, A., Choi, Y.: COMET: commonsense transformers for automatic knowledge graph construction. In: Korhonen, A., Traum, D.R., Màrquez, L. (eds.) ACL 2019, Florence, Italy, 28 July–2 August 2019, Volume 1: Long Papers, pp. 4762–4779 (2019)
3. Bouraoui, Z., Camacho-Collados, J., Schockaert, S.: Inducing relational knowledge from BERT. In: AAAI 2020, IAAI 2020, EAAI 2020, New York, NY, USA, 7–12 February 2020, pp. 7456–7463 (2020)
4. Brown, T.B., et al.: Language models are few-shot learners. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., Lin, H. (eds.) Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, 6–12 December 2020, Virtual (2020)
5. Davison, J., Feldman, J., Rush, A.M.: Commonsense knowledge mining from pretrained models. In: Inui, K., Jiang, J., Ng, V., Wan, X. (eds.) EMNLP-IJCNLP 2019, Hong Kong, China, 3–7 November 2019, pp. 1173–1178 (2019)