1. Abzianidze, L. (2020). Learning as abduction: Trainable natural logic theorem prover for natural language inference. In Proceedings of the 9th joint conference on lexical and computational semantics (pp. 20–31). Association for Computational Linguistics, Barcelona, Spain.
2. Beltagy, I., Roller, S., Cheng, P., Erk, K., & Mooney, R. J. (2016). Representing meaning with a combination of logical and distributional models. Computational Linguistics, 42(4), 763–808.
3. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, FAccT ’21 (pp. 610–623). Association for Computing Machinery, New York, NY, USA.
4. Bernardy, J.-P., & Chatzikyriakidis, S. (2017). A type-theoretical system for the FraCaS test suite: Grammatical Framework meets Coq. In Proceedings of the 12th international conference on computational semantics (IWCS) – long papers. Laboratoire d’Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM), Montpellier, France.
5. Bernardy, J.-P., & Chatzikyriakidis, S. (2019). What kind of natural language inference are NLP systems learning: Is this enough? In Proceedings of the 11th international conference on agents and artificial intelligence (ICAART 2019), volume 2 (pp. 919–931). SciTePress.