1. Buolamwini, J., Gebru, T.: Gender shades: intersectional accuracy disparities in commercial gender classification. In: Friedler, S.A., Wilson, C. (eds.) Proceedings of the 1st Conference on Fairness, Accountability and Transparency. Proceedings of Machine Learning Research, vol. 81, pp. 77–91. PMLR, New York, February 2018. http://proceedings.mlr.press/v81/buolamwini18a.html
2. Carter, S., Armstrong, Z., Schubert, L., Johnson, I., Olah, C.: Activation atlas. Distill (2019). https://doi.org/10.23915/distill.00015, https://distill.pub/2019/activation-atlas
3. Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., Tsaneva-Atanasova, K.: Artificial intelligence, bias and clinical safety. BMJ Qual. Saf. 28(3), 231–237 (2019). https://doi.org/10.1136/bmjqs-2018-008370, https://qualitysafety.bmj.com/content/28/3/231
4. Feghahati, A., Shelton, C.R., Pazzani, M.J., Tang, K.: CDeepEx: contrastive deep explanations (2019). https://openreview.net/forum?id=HyNmRiCqtm
5. Gilpin, L.H., Bau, D., Yuan, B.Z., Bajwa, A., Specter, M., Kagal, L.: Explaining explanations: an approach to evaluating interpretability of machine learning. CoRR abs/1806.00069 (2018). http://arxiv.org/abs/1806.00069