Abstract
Scientists form hypotheses and experimentally test them. If a hypothesis fails (is refuted), scientists try to explain the failure to eliminate other hypotheses. The more precise the failure analysis, the more hypotheses can be eliminated. Thus inspired, we introduce failure explanation techniques for inductive logic programming. Given a hypothesis represented as a logic program, we test it on examples. If a hypothesis fails, we explain the failure in terms of failing sub-programs. When a positive example fails, we identify failing sub-programs at the granularity of literals. We introduce a failure explanation algorithm based on analysing branches of SLD-trees. We integrate a meta-interpreter-based implementation of this algorithm with the test stage of the Popper ILP system. We show that fine-grained failure analysis allows for learning fine-grained constraints on the hypothesis space. Our experimental results show that explaining failures can drastically reduce hypothesis space exploration and learning times.
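To illustrate the idea of literal-level failure explanation, here is a minimal Prolog sketch, not the Popper-integrated implementation described in the paper. It assumes a hypothesis with a single non-recursive clause whose body literals are background predicates; the names hyp_clause/2, prove_prefix/2, explain_failure/2 and the target predicate f/2 are hypothetical. When a positive example fails, it reports the longest body-literal prefix provable on any SLD branch, which pinpoints the literal at which every branch fails.

```prolog
%% A minimal sketch (hypothetical names, single non-recursive clause,
%% background-only body literals); not the authors' implementation.

:- use_module(library(aggregate)).

% Hypothetical (incorrect) hypothesis: "f(A,B) holds if B is the head of the tail of A".
hyp_clause(f(A, B), [tail(A, C), head(C, B)]).

% Background knowledge.
head([H|_], H).
tail([_|T], T).

% prove_prefix(+Body, -Prefix): Prefix is a provable prefix of the body
% literals along one SLD branch; backtracking into call/1 explores
% alternative branches of the SLD-tree.
prove_prefix([], []).
prove_prefix([L|Ls], [L|Ps]) :-
    call(L),
    prove_prefix(Ls, Ps).
prove_prefix([_|_], []).

% explain_failure(+PosExample, -Explanation): if the positive example is not
% entailed by the hypothesis, report the longest provable body prefix over
% all SLD branches; the next body literal is where every branch fails.
explain_failure(Example, failing(Example, LongestPrefix)) :-
    hyp_clause(Example, Body),
    \+ prove_prefix(Body, Body),                 % the example truly fails
    aggregate_all(max(Len, Prefix),
                  ( prove_prefix(Body, Prefix), length(Prefix, Len) ),
                  max(_, LongestPrefix)).
```

For the positive example f([1,2,3], 3), the query ?- explain_failure(f([1,2,3], 3), E). yields E = failing(f([1,2,3],3), [tail([1,2,3],[2,3])]): the first body literal is provable, but no SLD branch proves head(C, 3), so a constraint can target the sub-program containing that literal rather than the hypothesis as a whole.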
Funder
Engineering and Physical Sciences Research Council
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Software