1. Alvin, C., Gulwani, S., Majumdar, R., & Mukhopadhyay, S., (2017). Synthesis of problems for shaded area geometry reasoning. In AIED (pp. 455–458).
2. Bakman, Y., (2007). Robust understanding of word problems with extraneous information. arXiv Mathematics e-prints.
3. Cao, Y., Hong, F., Li, H., & Luo, P., (2021). A bottom-up DAG structure extraction model for math word problems. In AAAI (pp. 39–46). AAAI Press.
4. Clark, K., Luong, M., Le, Q. V., & Manning, C. D., (2020). ELECTRA: pre-training text encoders as discriminators rather than generators. In ICLR. OpenReview.net.
5. Devlin, J., Chang, M., Lee, K., & Toutanova, K., (2019). BERT: pre-training of deep bidirectional transformers for language understanding. In J. Burstein, C. Doran, & T. Solorio (Eds.), NAACL-HLT (pp. 4171–4186). Association for Computational Linguistics.