1. Li et al., "VisualBERT: A simple and performant baseline for vision and language," arXiv preprint arXiv:1908.03557, 2019.
2. Zhou et al., "Unified vision-language pre-training for image captioning and VQA," Proc. AAAI, 2020.
3. Lample and Conneau, "Cross-lingual language model pretraining," Proc. NeurIPS, 2019.
4. Kumar and Tsvetkov, "Von Mises–Fisher loss for training sequence to sequence models with continuous outputs," Proc. ICLR, 2019.
5. Sennrich et al., "Improving neural machine translation models with monolingual data," Proc. ACL, 2016.