Affiliation:
1. Key Laboratory of Language and Cultural Computing of Ministry of Education, Northwest Minzu University, Lanzhou 730030, China
2. Key Laboratory of China’s Ethnic Languages and Intelligent Processing of Gansu Province, Northwest Minzu University, Lanzhou 730030, China
Abstract
To address the problems of insufficient semantic fusion between text and questions and the neglect of global semantic information in machine reading comprehension models, we propose BERT_hybrid, a machine reading comprehension model based on BERT and a hybrid attention mechanism. In this model, BERT maps the text and questions into the feature space separately. By integrating Bi-LSTM, an attention mechanism, and a self-attention mechanism, the proposed model achieves comprehensive semantic fusion between text and questions, and the probability distribution of answers is computed using Softmax. Experimental results on the public DuReader dataset demonstrate that the proposed model improves BLEU-4 and ROUGE-L scores over existing models. Furthermore, to validate the effectiveness of the model design, we analyze the factors influencing its performance.
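The pipeline described above (separate BERT encoding, Bi-LSTM re-encoding, cross-attention fusion of question into text, self-attention for global semantics, and a Softmax answer distribution) can be illustrated with a minimal sketch. This is not the authors' released implementation: it assumes PyTorch and HuggingFace Transformers, a span-extraction answer head, and hypothetical names and dimensions (BertHybrid, lstm_hidden, the bert-base-chinese checkpoint) chosen purely for illustration.

```python
# Minimal sketch of the BERT_hybrid pipeline from the abstract.
# Assumptions (not from the paper): PyTorch + HuggingFace Transformers,
# a Chinese BERT checkpoint, span-style answer output, and all
# hyperparameters below.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel

class BertHybrid(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, bert_name="bert-base-chinese",
                 hidden=768, lstm_hidden=256, heads=4):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        # Bi-LSTM re-encodes the contextual features produced by BERT.
        self.bilstm = nn.LSTM(hidden, lstm_hidden,
                              batch_first=True, bidirectional=True)
        d = 2 * lstm_hidden
        # Cross-attention: the text representation attends to the question.
        self.cross_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        # Self-attention captures global semantics within the fused text.
        self.self_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.start_head = nn.Linear(d, 1)
        self.end_head = nn.Linear(d, 1)

    def forward(self, p_ids, p_mask, q_ids, q_mask):
        # BERT maps text and question into the feature space separately.
        p = self.bert(input_ids=p_ids, attention_mask=p_mask).last_hidden_state
        q = self.bert(input_ids=q_ids, attention_mask=q_mask).last_hidden_state
        p, _ = self.bilstm(p)
        q, _ = self.bilstm(q)
        # Fuse question semantics into the text representation.
        fused, _ = self.cross_attn(p, q, q, key_padding_mask=(q_mask == 0))
        # Self-attention over the fused text for global context.
        glob, _ = self.self_attn(fused, fused, fused,
                                 key_padding_mask=(p_mask == 0))
        # Softmax yields the answer-span probability distributions.
        start = F.softmax(self.start_head(glob).squeeze(-1), dim=-1)
        end = F.softmax(self.end_head(glob).squeeze(-1), dim=-1)
        return start, end
```

Under these assumptions, the hybrid attention is realized as one cross-attention layer (text-question fusion) followed by one self-attention layer (global semantics); the actual model may combine them differently.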
Funder
National Natural Science Foundation of China
Fundamental Research Funds for the Central Universities