Publisher
Springer Nature Switzerland
References: 52 articles.
Cited by: 7 articles.