Modeling Extractive Question Answering Using Encoder-Decoder Models with Constrained Decoding and Evaluation-Based Reinforcement Learning

Published: 2023-03-27
Journal: Mathematics, Volume 11, Issue 7, Page 1624
ISSN: 2227-7390
Language: en

Authors:
Li Shaobo (1), Sun Chengjie (1), Liu Bingquan (1), Liu Yuanchao (1), Ji Zhenzhou (1)

Affiliation:
1. School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
Abstract
Extractive Question Answering, also known as machine reading comprehension, can be used to evaluate how well a computer comprehends human language. It is a valuable topic with many applications, such as in chatbots and personal assistants. End-to-end neural-network-based models have achieved remarkable performance on these tasks. The most frequently used approach to extract answers with neural networks is to predict the answer’s start and end positions in the document, independently or jointly. In this paper, we propose another approach that considers all words in an answer jointly. We introduce an encoder-decoder model to learn from all words in the answer. This differs from previous works, which usually focused on the start and end positions and ignored the words in between. To help the encoder-decoder model perform this task better, we employ evaluation-based reinforcement learning with different reward functions. The results of an experiment on the SQuAD dataset show that the proposed method can outperform the baseline in terms of F1 scores, offering another potential approach to solving the extractive QA task.
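The "evaluation-based" reward in the abstract refers to scoring a decoded answer with the task's own evaluation metric. A minimal sketch of such a reward is the SQuAD-style token-level F1 between the generated answer and the gold answer (the function name and the simple whitespace tokenization here are illustrative assumptions, not the paper's exact implementation):

```python
from collections import Counter


def token_f1(prediction: str, reference: str) -> float:
    """SQuAD-style token-level F1 between a predicted and a gold answer.

    This score can serve directly as the scalar reward for a decoded
    answer sequence in evaluation-based reinforcement learning.
    """
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both strings.
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, a prediction that covers only part of the gold span still receives partial credit: `token_f1("park", "in the park")` has precision 1 and recall 1/3, giving an F1 of 0.5, whereas an exact-position objective would score it as a miss.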
Funder:
National Key Research and Development Project; National Natural Science Foundation of China; Interdisciplinary Development Program of Harbin Institute of Technology; Fundamental Research Funds for the Central Universities
Subject:
General Mathematics; Engineering (miscellaneous); Computer Science (miscellaneous)
References (63 articles; first 5 shown):
1. Gupta. A Survey of Text Question Answering Techniques. Int. J. Comput. Appl., 2012.
2. Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S.R. (2018, January 1). GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Brussels, Belgium.
3. Mitra. An Introduction to Neural Information Retrieval. Found. Trends Inf. Retr., 2018.
4. Bowman, S.R., Angeli, G., Potts, C., and Manning, C.D. (2015, January 17–21). A large annotated corpus for learning natural language inference. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal.
5. Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., and Steinhardt, J. (2021, January 3–7). Measuring Massive Multitask Language Understanding. Proceedings of the 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria.
Cited by: 2 articles.