Affiliation:
1. RIKEN Center for Advanced Intelligence Project, Japan. noriki.nishida@riken.jp
2. RIKEN Center for Advanced Intelligence Project, Japan. yuji.matsumoto@riken.jp
Abstract
Discourse parsing has been studied for decades. However, it remains challenging to apply discourse parsing to real-world applications because parsing accuracy degrades significantly on out-of-domain text. In this paper, we report and discuss the effectiveness and limitations of bootstrapping methods for adapting modern BERT-based discourse dependency parsers to out-of-domain text without relying on additional human supervision. Specifically, we investigate self-training, co-training, tri-training, and asymmetric tri-training of graph-based and transition-based discourse dependency parsing models, as well as confidence measures and sample selection criteria, in two adaptation scenarios: monologue adaptation between scientific disciplines and dialogue genre adaptation. We also release the COVID-19 Discourse Dependency Treebank (COVID19-DTB), a new manually annotated resource for discourse dependency parsing of biomedical paper abstracts. The experimental results show that bootstrapping is significantly and consistently effective for unsupervised domain adaptation of discourse dependency parsing, but the low coverage of accurately predicted pseudo labels is a bottleneck for further improvement. We show that active learning can mitigate this limitation.
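The abstract refers to bootstrapping via self-training with confidence-based sample selection. The following Python sketch illustrates that general idea under assumed interfaces; the parser methods (`train`, `parse_with_confidence`) and the threshold-based selection rule are hypothetical placeholders, not the authors' actual implementation of the BERT-based graph- or transition-based parsers.

```python
# Minimal sketch of confidence-based self-training for unsupervised domain
# adaptation of a discourse dependency parser. All interfaces are assumptions
# for illustration only.

def self_train(parser, labeled_source, unlabeled_target,
               num_rounds=5, confidence_threshold=0.9):
    """Iteratively add confidently parsed target-domain documents as pseudo labels."""
    train_set = list(labeled_source)
    for _ in range(num_rounds):
        # Retrain on gold source-domain data plus pseudo-labeled target-domain data.
        parser.train(train_set)

        pseudo_labeled = []
        for doc in unlabeled_target:
            tree, confidence = parser.parse_with_confidence(doc)
            # Sample selection: keep only predictions the parser is confident about.
            if confidence >= confidence_threshold:
                pseudo_labeled.append((doc, tree))

        if not pseudo_labeled:
            # Few confident predictions limit coverage and stall further gains,
            # which is the bottleneck the abstract points to.
            break
        train_set = list(labeled_source) + pseudo_labeled
    return parser
```

Co-training and (asymmetric) tri-training follow the same loop structure but use the agreement of two or three differently initialized or differently designed parsers, rather than a single model's confidence, to select pseudo labels.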
Subject
Artificial Intelligence, Computer Science Applications, Linguistics and Language, Human-Computer Interaction, Communication