Fine-Grained Domain Adaptation for Chinese Syntactic Processing

Author:

Zhang Meishan [1], Guo Peiming [2], Jiang Peijie [3], Long Dingkun [4], Sun Yueheng [5], Xu Guangwei [4], Xie Pengjun [4], Zhang Min [1]

Affiliation:

1. Institute of Computing and Intelligence, Harbin Institute of Technology (Shenzhen), China

2. College of Intelligence and Computing, Tianjin University, China

3. School of New Media and Communication, Tianjin University, China

4. National Coalition of Independent Scholars, China

5. Institute of Computing and Intelligence, Tianjin University, China

Abstract

Syntactic processing is fundamental to natural language processing. It provides rich and comprehensive syntactic information about sentences that can benefit downstream tasks. Recently, pretrained language models have shown great success in Chinese syntactic processing, which typically involves word segmentation, POS tagging, and dependency parsing. However, research in this area is far from finished, since performance degrades drastically when models are tested on a highly discrepant domain. This problem is widely known as domain adaptation, where the test domain differs from the training domain in supervised learning. Self-training is one promising solution, and straightforward source-to-target adaptation has already shown remarkable effectiveness in previous work. However, this strategy ignores the fact that sentences of the target domain may exhibit very different gaps from the source training domain; in particular, sentences with large gaps might fail under direct self-training adaptation. To this end, we propose fine-grained domain adaptation for Chinese syntactic processing, aiming to model the gaps between the source and target domains accurately and progressively. The key idea is to divide the target domain into fine-grained subdomains by using a specified domain distance metric, and then to perform gradual self-training on the subdomains. We further offer an approximate, intuitive theoretical illustration based on the theory of Kumar et al. (2020). In addition, we propose a novel representation learning framework to encode the fine-grained subdomains effectively, so as to fully exploit the above idea. Experimental results on benchmark datasets show that our method achieves significant improvements over a variety of baselines.
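The gradual self-training idea summarized in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the `NearestCentroid` classifier and the centroid-distance subdomain metric are simple stand-ins for the actual syntactic models and domain distance metric the paper uses.

```python
import numpy as np


class NearestCentroid:
    """Toy classifier (stand-in for the real syntactic model)
    used only to make the sketch runnable."""

    def fit(self, X, y):
        self.labels = np.unique(y)
        self.centroids = np.stack([X[y == c].mean(axis=0) for c in self.labels])
        return self

    def predict(self, X):
        # Distance from each example to each class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids[None, :, :], axis=2)
        return self.labels[np.argmin(d, axis=1)]


def split_subdomains(source_X, target_X, k):
    """Rank target examples by distance to the source centroid
    (a stand-in for the paper's domain distance metric) and
    split them into k fine-grained subdomains, nearest first."""
    centroid = source_X.mean(axis=0)
    dists = np.linalg.norm(target_X - centroid, axis=1)
    order = np.argsort(dists)
    return np.array_split(order, k)


def gradual_self_training(model, source_X, source_y, target_X, k=3):
    """Train on the source domain, then adapt subdomain by subdomain:
    pseudo-label the nearest subdomain, add it to the training pool,
    retrain, and proceed to the next-farther subdomain."""
    X, y = source_X.copy(), source_y.copy()
    model.fit(X, y)
    for idx in split_subdomains(source_X, target_X, k):
        pseudo = model.predict(target_X[idx])
        X = np.vstack([X, target_X[idx]])
        y = np.concatenate([y, pseudo])
        model.fit(X, y)
    return model
```

The point of the progressive loop is that each retraining step only has to bridge a small gap, so pseudo-labels on the nearer subdomains stay reliable enough to support the farther ones, whereas one-shot source-to-target self-training must bridge the full gap at once.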

Funder

National Natural Science Foundation of China

Publisher

Association for Computing Machinery (ACM)

Subject

General Computer Science

References (84 articles)

1. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. 2006. Analysis of representations for domain adaptation. Advances in Neural Information Processing Systems 19 (2006), 137–144.

2. Generating Sentences from a Continuous Space

3. Minmin Chen, Kilian Q. Weinberger, and John Blitzer. 2011. Co-training for domain adaptation. In Proceedings of NeurIPS.

4. Exploiting meta features for dependency parsing and part-of-speech tagging

5. Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuan-Jing Huang. 2017. Adversarial multi-criteria learning for Chinese word segmentation. In Proceedings of the 55th ACL. 1193–1203.
