Affiliation:
1. Department of Computer Science, University of Cambridge, United Kingdom. ak2329@cam.ac.uk
2. Meta AI and University College London, United Kingdom. sriedel@fb.com
3. Department of Computer Science, University of Cambridge, United Kingdom. av308@cam.ac.uk
Abstract
Fact verification systems typically rely on neural network classifiers for veracity prediction, which lack explainability. This paper proposes ProoFVer, which uses a seq2seq model to generate natural logic-based inferences as proofs. These proofs consist of lexical mutations between spans in the claim and the retrieved evidence, each marked with a natural logic operator. Claim veracity is determined solely by the sequence of these operators. Hence, the proofs are faithful explanations, making ProoFVer faithful by construction. Currently, ProoFVer has the highest label accuracy and the second-best score on the FEVER leaderboard. Furthermore, it improves over the next best model by 13.21 percentage points on a dataset with counterfactual instances, demonstrating its robustness. As explanations, the proofs show better overlap with human rationales than attention-based highlights, and they help humans predict model decisions correctly more often than the evidence alone.
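The abstract states that claim veracity is determined solely by the sequence of natural logic operators in the proof. A minimal sketch of this idea is a finite-state automaton that consumes operators one at a time; the operator set and transition table below are simplified, illustrative assumptions, not ProoFVer's exact ones.

```python
# Sketch of veracity prediction from a natural-logic operator sequence.
# The operators and transitions are hypothetical simplifications of a
# NatLog-style relation set; ProoFVer's actual automaton may differ.

EQUIVALENCE = "="    # claim span and evidence span mean the same thing
FORWARD_ENT = "<"    # claim span is entailed by the evidence span
NEGATION = "!"       # spans contradict each other
INDEPENDENCE = "#"   # spans are unrelated

# Transition table over veracity states (illustrative). Missing entries
# fall through to NOT ENOUGH INFO, which acts as an absorbing state.
TRANSITIONS = {
    ("SUPPORTS", EQUIVALENCE): "SUPPORTS",
    ("SUPPORTS", FORWARD_ENT): "SUPPORTS",
    ("SUPPORTS", NEGATION): "REFUTES",
    ("SUPPORTS", INDEPENDENCE): "NOT ENOUGH INFO",
    ("REFUTES", EQUIVALENCE): "REFUTES",
    ("REFUTES", FORWARD_ENT): "REFUTES",
    ("REFUTES", NEGATION): "SUPPORTS",  # double negation flips back
    ("REFUTES", INDEPENDENCE): "NOT ENOUGH INFO",
}

def veracity(operators):
    """Run the operator sequence through the automaton, starting at SUPPORTS."""
    state = "SUPPORTS"
    for op in operators:
        state = TRANSITIONS.get((state, op), "NOT ENOUGH INFO")
    return state

print(veracity(["=", "<"]))  # all entailing mutations -> SUPPORTS
print(veracity(["=", "!"]))  # a single negation -> REFUTES
print(veracity(["#"]))       # unrelated span -> NOT ENOUGH INFO
```

Because the label depends only on this deterministic walk over the generated operators, the proof cannot disagree with the prediction, which is what makes the explanation faithful by construction.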
Subject
Artificial Intelligence, Computer Science Applications, Linguistics and Language, Human-Computer Interaction, Communication
Cited by 6 articles.