1. N. Timalsina, “Indolib: A natural language processing toolkit for low-resource South Asian languages,” Master’s thesis, Harvard University Division of Continuing Education, 2022.
2. C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, “Exploring the limits of transfer learning with a unified text-to-text transformer,” Journal of Machine Learning Research, vol. 21, no. 140, pp. 1–67, 2020. Available: http://jmlr.org/papers/v21/20-074.html.
3. L. Xue, N. Constant, A. Roberts, M. Kale, R. Al-Rfou, A. Siddhant, A. Barua, and C. Raffel, “mT5: A massively multilingual pre-trained text-to-text transformer,” 2020. Available: https://doi.org/10.48550/arXiv.2010.11934.
4. J. C. Cruz and C. Cheng, “Evaluating language model finetuning techniques for low-resource languages,” 2019. Available: https://arxiv.org/pdf/1907.00409v1.pdf.
5. S. Torres-Ramos and R. E. Garay-Quezada, “A survey on statistical-based parallel corpus alignment,” in Research in Computing Science, vol. 90, pp. 57–76, 2015.