Authors:
Chen Xinran, Zhao Yuran, Guo Jianming, Duan Sufeng, Liu Gongshen
Publisher:
Springer Nature Singapore
References (27 articles):
1. Akoury, N., Krishna, K., Iyyer, M.: Syntactically supervised transformers for faster neural machine translation. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1269–1281 (2019)
2. Chen, K., Wang, R., Utiyama, M., Sumita, E., Zhao, T.: Syntax-directed attention for neural machine translation. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 4792–4799 (2018)
3. Ghazvininejad, M., Levy, O., Liu, Y., Zettlemoyer, L.: Mask-predict: parallel decoding of conditional masked language models. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 6112–6121 (2019)
4. Gu, J., Bradbury, J., Xiong, C., Li, V.O.K., Socher, R.: Non-autoregressive neural machine translation. In: 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3 2018, Conference Track Proceedings (2018)
5. Gu, J., Wang, C., Zhao, J.: Levenshtein transformer. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems, pp. 11181–11191 (2019)