Abstract
This paper analyses whether current explainable AI (XAI) techniques can help to address taxpayer concerns about the use of AI in taxation. As tax authorities around the world increase their use of AI-based techniques, taxpayers are increasingly at a loss about whether and how the ensuing decisions follow the procedures required by law and respect their substantive rights. The use of XAI has been proposed as a response to this issue, but it is still an open question whether current XAI techniques are enough to meet existing legal requirements. The paper approaches this question in the context of a case study: a prototype tax fraud detector trained on an anonymized dataset of real-world cases handled by the Buenos Aires (Argentina) tax authority. The decisions produced by this detector are explained through the use of various classification methods, and the outputs of these explanation models are evaluated on their explanatory power and on their compliance with the legal obligation that tax authorities provide the rationale behind their decision-making. We conclude the paper by suggesting technical and legal approaches for designing explanation mechanisms that meet the needs of legal explanation in the tax domain.
Funder
IBM-Notre Dame University Tech Ethics Lab
Fundación Carolina
European University Institute - Fiesole
Publisher
Springer Science and Business Media LLC