Abstract
Some languages have very few NLP resources, while many of them are closely related to better-resourced languages. This paper explores how the similarity between such languages can be utilised by porting resources from better- to lesser-resourced ones. The paper introduces a way of building a representation shared across related languages by combining cross-lingual embedding methods with a lexical similarity measure based on the weighted Levenshtein distance. One of the outcomes of the experiments is a Panslavonic embedding space for nine Balto-Slavonic languages. The paper demonstrates that the resulting embedding space helps in applications such as morphological prediction, named-entity recognition and genre classification.
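To illustrate the lexical similarity measure mentioned in the abstract, below is a minimal, hypothetical Python sketch of a weighted Levenshtein distance, in which substitution costs are taken from a character-pair weight table so that regular spelling correspondences between related languages are penalised less than arbitrary substitutions. The weight values and character pairs used here are illustrative assumptions, not the paper's actual parameters.

# Hypothetical sketch of a weighted Levenshtein distance: substitution costs
# come from a character-pair weight table, so near-equivalent characters across
# related languages cost less than arbitrary substitutions. The weights below
# are toy values, not the paper's actual weights.

def weighted_levenshtein(a: str, b: str, sub_weights: dict,
                         default_sub: float = 1.0, indel: float = 1.0) -> float:
    """Edit distance where substituting x -> y costs sub_weights.get((x, y), default_sub)."""
    m, n = len(a), len(b)
    # dp[i][j] = cost of transforming a[:i] into b[:j]
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * indel
    for j in range(1, n + 1):
        dp[0][j] = j * indel
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                sub_cost = 0.0
            else:
                sub_cost = sub_weights.get((a[i - 1], b[j - 1]), default_sub)
            dp[i][j] = min(
                dp[i - 1][j] + indel,         # deletion
                dp[i][j - 1] + indel,         # insertion
                dp[i - 1][j - 1] + sub_cost,  # (weighted) substitution
            )
    return dp[m][n]

# Toy example: treat the Ukrainian/Russian 'і'/'и' correspondence as cheap.
weights = {("і", "и"): 0.2, ("и", "і"): 0.2}
print(weighted_levenshtein("літо", "лито", weights))  # 0.2 rather than 1.0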
Publisher
Cambridge University Press (CUP)
Subject
Artificial Intelligence, Linguistics and Language, Language and Linguistics, Software
Cited by (6 articles)
1. Fine-tuning language models to recognize semantic relations. Language Resources and Evaluation, 2023-07-23.
2. Conclusions and Future Research. Building and Using Comparable Corpora for Multilingual Natural Language Processing, 2023.
3. Other Applications of Comparable Corpora. Building and Using Comparable Corpora for Multilingual Natural Language Processing, 2023.
4. Induction of Bilingual Dictionaries. Building and Using Comparable Corpora for Multilingual Natural Language Processing, 2023.
5. East Slavic interference in L2 Polish: state of the art and future perspectives. Applied Linguistics Papers, 2022-12-17.