1. Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015). arXiv:1409.0473v7 [cs.CL]. DOI: https://doi.org/10.48550/arXiv.1409.0473.
2. Bojanowski, Piotr, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5: 135–46. https://aclanthology.org/Q17-1010.pdf (accessed December 28, 2022).
3. Bouma, Gerlof. 2009. Normalized (pointwise) mutual information in collocation extraction. In From Form to Meaning: Processing Texts Automatically: Proceedings of the Biennial GSCL Conference 2009, eds. Christian Chiarcos, Richard Eckart de Castilho and Manfred Stede, 31–40. Tübingen: Gunter Narr.
4. Camacho-Collados, José, and Mohammad Taher Pilehvar. 2018. From word to sense embeddings: A survey on vector representations of meaning. Journal of Artificial Intelligence Research 63: 743–88. DOI: https://doi.org/10.1613/jair.1.11259.
5. Diniz da Costa, Alexandre, Mateus Coutinho Marim, Ely Edison da Silva Matos, and Tiago Timponi Torrent. 2022. Domain adaptation in neural machine translation using a qualia-enriched FrameNet. In Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022), eds. Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk and Stelios Piperidis, 1–12. Paris: European Language Resources Association (ELRA). http://www.lrec-conf.org/proceedings/lrec2022/LREC-2022.pdf (accessed December 28, 2022).