Affiliation:
1. Department of Computer Science, Emory University, Atlanta, GA 30322 USA. han.he@emory.edu
2. Department of Computer Science, Emory University, Atlanta, GA 30322 USA. jinho.choi@emory.edu
Abstract
Sequence-to-Sequence (S2S) models have achieved remarkable success on various text generation tasks. However, learning complex structures with S2S models remains challenging, as external neural modules and additional lexicons are often supplemented to predict non-textual outputs. We present a systematic study of S2S modeling using constrained decoding on four core tasks: part-of-speech tagging, named entity recognition, constituency parsing, and dependency parsing, to develop efficient exploitation methods that cost zero extra parameters. In particular, three lexically diverse linearization schemas and corresponding constrained decoding methods are designed and evaluated. Experiments show that although more lexicalized schemas yield longer output sequences that require heavier training, their sequences are closer to natural language and are therefore easier to learn. Moreover, S2S models using our constrained decoding outperform other S2S approaches that rely on external resources. Our best models perform better than or comparably to the state-of-the-art on all four tasks, highlighting the promise of S2S models for generating non-sequential structures.
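To illustrate the idea of constrained decoding described in the abstract, the sketch below shows a generic logit-masking loop in Python. This is not the authors' implementation; the schema rule in allowed_ids (alternating word and tag tokens) and the names step_logits_fn and vocab are hypothetical, chosen only to make the mechanism concrete.

```python
# Minimal sketch of constrained decoding for S2S tagging (illustrative only).
# At each step, logits of tokens that are invalid under the output schema are
# masked before the argmax, so the constraint adds zero extra parameters.
import numpy as np

TAGS = {"NOUN", "VERB", "DET"}  # hypothetical tag inventory

def allowed_ids(prefix, vocab):
    # Hypothetical schema: a tag token must follow a word token, and vice versa.
    if prefix and prefix[-1] not in TAGS:
        return [i for i, t in enumerate(vocab) if t in TAGS]
    return [i for i, t in enumerate(vocab) if t not in TAGS]

def constrained_greedy_decode(step_logits_fn, vocab, max_len=10):
    prefix = []
    for _ in range(max_len):
        logits = step_logits_fn(prefix)         # model scores for the next token
        mask = np.full(len(vocab), -np.inf)
        mask[allowed_ids(prefix, vocab)] = 0.0  # keep only schema-valid tokens
        prefix.append(vocab[int(np.argmax(logits + mask))])
    return prefix
```

Because the constraint is applied purely by masking the decoder's output distribution, it requires no additional trained components, which is consistent with the abstract's "zero extra parameters" claim.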
Subject
Artificial Intelligence, Computer Science Applications, Linguistics and Language, Human-Computer Interaction, Communication