Affiliation:
1. Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology
2. Westlake University
3. Institute of Advanced Technology, Westlake Institute for Advanced Study
Abstract
Multi-lingual contextualized embeddings, such as multilingual-BERT (mBERT), have
shown success in a variety of zero-shot cross-lingual tasks.
However, these models are limited by having inconsistent contextualized representations of subwords across different languages.
Existing work addresses this issue with bilingual projection and fine-tuning techniques.
We propose a data augmentation framework that generates multi-lingual code-switching data to fine-tune mBERT, encouraging the model to align representations from the source and multiple target languages at once by mixing their context information.
Compared with the existing work, our method does not rely on bilingual sentences for training, and requires only one training process for multiple target languages.
Experimental results on five tasks across 19 languages show that our method yields significantly improved performance on all tasks compared with mBERT.
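
As a rough illustration of the code-switching augmentation described in the abstract, the sketch below randomly replaces source-language words with translations drawn from several target-language dictionaries, producing mixed-language sentences that could then be used to fine-tune mBERT. The dictionary contents, replacement ratio, and function names are hypothetical assumptions for illustration, not details taken from the paper.

```python
import random

# Toy bilingual dictionaries (hypothetical; real lexicons such as MUSE
# would be used in practice, which this abstract does not specify).
BILINGUAL_DICTS = {
    "de": {"good": "gut", "movie": "Film", "very": "sehr"},
    "es": {"good": "bueno", "movie": "película", "very": "muy"},
}

def code_switch(tokens, dicts, replace_ratio=0.3, seed=None):
    """Randomly replace a fraction of source tokens with translations
    from one of several target-language dictionaries, yielding a
    multi-lingual code-switched sentence."""
    rng = random.Random(seed)
    switched = []
    for tok in tokens:
        if rng.random() < replace_ratio:
            lang = rng.choice(list(dicts))  # pick a target language at random
            # Fall back to the original token if no translation is available.
            switched.append(dicts[lang].get(tok.lower(), tok))
        else:
            switched.append(tok)
    return switched

if __name__ == "__main__":
    sentence = "this movie is very good".split()
    print(code_switch(sentence, BILINGUAL_DICTS, seed=0))
    # e.g. ['this', 'Film', 'is', 'muy', 'good'] -- such mixed sentences
    # would be fed to mBERT during fine-tuning.
```

Mixing several target languages into one augmented sentence is what lets a single fine-tuning run align the source representation with all target languages simultaneously, as opposed to one bilingual projection per language pair.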
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
33 articles.