Abstract
Natural language texts often contain implicit information that can be deduced using background or common-sense knowledge. Recognizing textual entailment falls into this category: given two pieces of text, the task is to decide whether the second is entailed by, contradicted by, or unrelated to the first. Causality is one such relation, where the event or situation described in one text can cause the event or situation described in the other. The aim of this paper is to improve textual entailment by identifying causality and contradiction relations between texts using knowledge sources. To this end, we employ a knowledge-based method that recognizes textual entailment using linguistic and common-sense knowledge (ontologies), with particular emphasis on implicit or ambiguous causal and contradictory relations. We first develop two ontologies encoding causality and contradiction relations among concepts. The pipeline then follows two parallel paths. In the first, a rule-based system identifies simple contradiction relations; its rules are extracted semi-automatically using data-mining methods. In the second, the causality and contradiction knowledge is injected into language models through prompt engineering, and the enriched models are then fine-tuned on entailment datasets to perform textual entailment. The results indicate that combining the rule-based method with the knowledge-enriched models yields the best performance: our best system achieves 87.5% accuracy on the RTE task, trained and tested on the FarsTail dataset, a 3.5% improvement over baselines and state-of-the-art systems.