A Survey of the Model Transfer Approaches to Cross-Lingual Dependency Parsing

Authors:

Ayan Das1, Sudeshna Sarkar1

Affiliation:

1. Indian Institute of Technology Kharagpur, Kharagpur, West Bengal, India

Abstract

Cross-lingual dependency parsing approaches have been employed to develop dependency parsers for languages for which few or no treebanks are available, using the treebanks of other languages. The language for which the cross-lingual parser is developed is usually referred to as the target language, and the language whose treebank is used to train the cross-lingual parser model is referred to as the source language. Cross-lingual approaches to dependency parsing may be broadly classified into three categories: model transfer, annotation projection, and treebank translation. This survey provides an overview of the various aspects of the model transfer approach to cross-lingual dependency parsing. We present a classification of model transfer approaches based on different aspects of the method, and we discuss some of the challenges associated with cross-lingual parsing and the techniques used to address them. To address the difference in vocabulary between two languages, some approaches use only non-lexical features of the words to train the models, while others use shared representations of the words. Some approaches address morphological differences by chunk-level rather than word-level transfer. Syntactic differences between the source and target languages are sometimes addressed by transforming the source-language treebanks or by combining the resources of multiple source languages. Furthermore, a cross-lingual transfer parser model may be developed for a specific target language, or it may be trained to parse sentences of multiple languages. With respect to the above-mentioned aspects, we look at the different ways in which the methods can be classified, and we further classify and discuss the approaches from the perspective of each aspect. We also report the performance of the transferred models on a common dataset under different settings corresponding to these classification aspects.
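The delexicalization idea mentioned in the abstract (training on non-lexical features only, so that the vocabulary mismatch between source and target languages disappears) can be sketched as follows. The tagged-sentence format and the example sentences are illustrative assumptions, not taken from the survey itself; the sketch assumes both languages are annotated with a shared universal POS tagset.

```python
def delexicalize(tagged_sentence):
    """Drop the word forms, keeping only the POS tag of each (word, tag) pair."""
    return [tag for _word, tag in tagged_sentence]

# Hypothetical source-language (English) sentence with universal POS tags.
en = [("The", "DET"), ("dog", "NOUN"), ("barks", "VERB")]
# Hypothetical target-language (German) sentence with the same tag sequence.
de = [("Der", "DET"), ("Hund", "NOUN"), ("bellt", "VERB")]

# After delexicalization the two sentences are indistinguishable, which is
# why a parser trained on delexicalized source-language features can be
# applied directly to the target language.
assert delexicalize(en) == delexicalize(de) == ["DET", "NOUN", "VERB"]
```

In practice the shared tagset (e.g. universal POS tags) is what makes this transfer possible; with language-specific tagsets the delexicalized representations would still not align.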

Publisher

Association for Computing Machinery (ACM)

Subject

General Computer Science

Cited by 6 articles.

1. Transferring Sentiment Cross-Lingually within and across Same-Family Languages;Applied Sciences;2024-06-28

2. Swahili Speech Dataset Development and Improved Pre-training Method for Spoken Digit Recognition;ACM Transactions on Asian and Low-Resource Language Information Processing;2023-07-20

3. Filtering and Extended Vocabulary based Translation for Low-resource Language Pair of Sanskrit-Hindi;ACM Transactions on Asian and Low-Resource Language Information Processing;2023-04-12

4. Building Indonesian Dependency Parser Using Cross-lingual Transfer Learning;2022 International Conference on Asian Language Processing (IALP);2022-10-27

5. Cross-lingual transfer learning for relation extraction using Universal Dependencies;Computer Speech & Language;2022-01
