Affiliation:
1. Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian 223003, China
Abstract
Recent advances in pretrained language models have achieved state-of-the-art results on a wide range of natural language processing tasks. However, these large pretrained language models are difficult to deploy in practical settings such as mobile and embedded devices. Moreover, no pretrained language model exists for the chemical industry. In this work, we propose a method for pretraining a smaller language representation model for the chemical industry domain. First, a large corpus of chemical industry texts is used for pretraining, and a nontraditional knowledge distillation technique is used to build a compact model that learns the knowledge contained in the BERT model. By learning from the embedding layer, the intermediate layers, and the prediction layer in separate stages, the compact model acquires not only the output probability distribution of the prediction layer but also the representations of the embedding and intermediate layers, thereby inheriting the learning ability of the BERT model. Finally, the model is applied to downstream tasks. Experiments show that, compared with current BERT distillation methods, our method makes full use of the rich feature knowledge in the intermediate layers of the teacher model while building the student model on a BiLSTM architecture; this effectively avoids the excessive size of traditional Transformer-based student models and improves the accuracy of the language model in the chemical domain.
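The staged distillation objective described above can be sketched as a weighted sum of an embedding-layer loss, an intermediate-layer loss, and a prediction-layer loss. The following is a minimal NumPy sketch, not the authors' implementation: the loss weights, the temperature, and the assumption that teacher and student tensors already share the same shape (in practice a learned projection would align the BiLSTM student's hidden size with BERT's) are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) for probability arrays of the same shape.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def distillation_loss(t_emb, s_emb, t_hid, s_hid, t_logits, s_logits,
                      temperature=2.0, alpha=0.5, beta=0.3, gamma=0.2):
    """Hypothetical three-stage distillation loss.

    Weights (alpha, beta, gamma) and temperature are illustrative;
    tensors are assumed pre-projected to matching shapes.
    """
    # Stage 1: embedding layer -- match teacher and student embeddings.
    emb_loss = mse(t_emb, s_emb)
    # Stage 2: intermediate layers -- match selected hidden states.
    hid_loss = mse(t_hid, s_hid)
    # Stage 3: prediction layer -- match temperature-softened distributions.
    p_teacher = softmax(t_logits / temperature)
    p_student = softmax(s_logits / temperature)
    pred_loss = kl_divergence(p_teacher, p_student) * temperature ** 2
    return alpha * emb_loss + beta * hid_loss + gamma * pred_loss
```

When teacher and student outputs agree exactly, every term is zero; the weighted structure lets training emphasize each stage in turn, as the abstract's staged learning suggests.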
Funder
National Basic Research Program of China
Subject
Computer Science Applications, Software
Cited by
4 articles.