Abstract
The field of machine translation (MT), the automatic translation of written text from one natural language into another, has experienced a major paradigm shift in recent years. Statistical MT, which mainly relies on various count-based models and which dominated MT research for decades, has largely been superseded by neural machine translation (NMT), which tackles translation with a single neural network. In this work we trace the origins of modern NMT architectures back to word and sentence embeddings and earlier examples of the encoder-decoder network family. We conclude with a short survey of more recent trends in the field.
Cited by
161 articles.