Abstract
Progress in pre-trained language models has led to a surge of impressive results on downstream tasks for natural language understanding. Recent work on probing pre-trained language models has uncovered a wide range of linguistic properties encoded in their contextualized representations. However, it is unclear whether these representations encode the semantic knowledge that is crucial to symbolic inference methods. We propose a methodology for probing pre-trained language model representations for the knowledge that logical inference systems require but that is often missing. Our probing datasets cover key types of knowledge used by many symbolic inference systems. We find that (i) pre-trained language models do encode several types of knowledge for inference, but some types remain unencoded, and (ii) language models can effectively learn the missing knowledge through fine-tuning. Overall, our findings provide insight into which aspects of knowledge for inference language models and their pre-training procedures capture. Moreover, we demonstrate language models' potential as semantic and background knowledge bases for supporting symbolic inference methods.
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)