1. Ahmad WU, Chakraborty S, Ray B, Chang KW (2021) Unified pre-training for program understanding and generation. In: NAACL-HLT. Association for Computational Linguistics, pp 2655–2668
2. Alon U, Brody S, Levy O, Yahav E (2019) code2seq: generating sequences from structured representations of code. In: ICLR
3. Bahdanau D, Cho K, Bengio Y (2015) Neural machine translation by jointly learning to align and translate. In: ICLR
4. Banerjee S, Lavie A (2005) METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In: IEEvaluation@ACL. Association for Computational Linguistics, pp 65–72
5. Barnett JG, Gathuru CK, Soldano LS, McIntosh S (2016) The relationship between commit message detail and defect proneness in Java projects on GitHub. In: MSR. ACM, pp 496–499