Affiliation:
1. American University of Beirut, Lebanon
Abstract
The rise of deep learning techniques has produced significantly better predictions in several fields, which has led to widespread applicability in healthcare, finance, and autonomous systems. The success of such models comes at the expense of a traceable and transparent decision-making process in areas with legal and ethical implications. Given the criticality of the decisions in such areas, governments and industries are making sizeable investments in the accountability of AI. Accordingly, the nascent field of explainable and fair AI should be a focal point in discussions of emergent applications, especially in high-stakes fields. This chapter covers the terminology of accountable AI while focusing on two main aspects: explainability and fairness. The chapter motivates the use cases of each aspect and covers state-of-the-art methods in interpretable AI, as well as methods used to evaluate the fairness of machine learning models and to detect and mitigate any underlying bias.