1. Abdul-Mageed, M. and Ungar, L. (2017). “EmoNet: Fine-Grained Emotion Detection with Gated Recurrent Neural Networks.” In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 718–728, Vancouver, Canada. Association for Computational Linguistics.
2. Borgeaud, S., Mensch, A., Hoffmann, J., Cai, T., Rutherford, E., Millican, K., van den Driessche, G., Lespiau, J.-B., Damoc, B., Clark, A., et al. (2022). “Improving Language Models by Retrieving from Trillions of Tokens.” In International Conference on Machine Learning, pp. 2206–2240. PMLR.
3. Busso, C., Bulut, M., Lee, C.-C., Kazemzadeh, A., Mower, E., Kim, S., Chang, J. N., Lee, S., and Narayanan, S. S. (2008). “IEMOCAP: Interactive Emotional Dyadic Motion Capture Database.” Language Resources and Evaluation, 42 (4), pp. 335–359.
4. Chung, J., Gulcehre, C., Cho, K., and Bengio, Y. (2014). “Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling.” arXiv preprint arXiv:1412.3555.
5. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” arXiv preprint arXiv:1810.04805.