Abstract
When the phenomena of interest are in need of explanation, we are often in search of the underlying root causes. Causal inference provides tools for identifying these root causes: by performing interventions on suitably chosen variables, we can observe downstream effects on the outcome variable of interest. Argumentation, as an approach to attributing observed outcomes to specific factors, in turn lends itself naturally as a tool for determining the most plausible explanation. We can further improve the robustness of such explanations by measuring their likelihood within a mutually agreed-upon causal model. For this, typically one of two in-principle distinct types of counterfactual explanation is used: interventional counterfactuals, which treat changes as deliberate interventions into the causal system, and backtracking counterfactuals, which attribute changes exclusively to exogenous factors. Although both frameworks share the common goal of inferring true causal factors, they differ fundamentally in their conception of counterfactuals. Here, we present the first approach that decides when to expect interventional and when to opt for backtracking counterfactuals.
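The distinction between the two counterfactual types can be made concrete on a toy structural causal model. The model below is a hypothetical illustration (not from the paper): `A` is an exogenous cause, `B` a mediator, and `C` an outcome depending on both. For the query "what if B had been 2?", the interventional reading severs B's mechanism via do(B = 2) while keeping the exogenous terms fixed, whereas the backtracking reading keeps all mechanisms intact and instead explains B = 2 by altering an exogenous factor, which propagates back through A. The two readings yield different outcomes for C.

```python
# Toy SCM (a hypothetical example, not the paper's model):
#   A = U_A            exogenous cause
#   B = A + U_B        mediator
#   C = A + B          outcome, depends on BOTH A and B

def solve(u_a, u_b, do_b=None):
    """Evaluate the SCM; `do_b` overrides B's mechanism (an intervention)."""
    a = u_a
    b = do_b if do_b is not None else a + u_b
    c = a + b
    return a, b, c

# Factual world: U_A = 1, U_B = 0  ->  A = 1, B = 1, C = 2.
factual = solve(u_a=1, u_b=0)

# Counterfactual query: "what if B had been 2?"

# Interventional: do(B = 2) severs B's mechanism; the exogenous terms
# (and hence A) keep their factual values.
interventional = solve(u_a=1, u_b=0, do_b=2)   # A stays 1, so C = 3

# Backtracking: attribute B = 2 to exogenous change instead, e.g. set
# U_A = 2 (one possible minimal change); the intact mechanisms then
# propagate the change back through A as well.
backtracking = solve(u_a=2, u_b=0)             # A becomes 2, so C = 4
```

Even in this three-variable chain the two semantics disagree on the outcome (C = 3 vs. C = 4), which is exactly why a principled criterion for choosing between them, as proposed here, is needed.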
Publisher
Springer Nature Switzerland