Authors: Abigail Goldsteen, Ariel Farkash, Michael Hind
References (67 articles)
1. Abadi, M., Chu, A., Goodfellow, I., McMahan, H.B., Mironov, I., Talwar, K., & Zhang, L. (2016). Deep learning with differential privacy. In Proceedings of the ACM SIGSAC conference on computer and communications security (pp. 308–318).
2. Ackerman, S., Raz, O., & Zalmanovici, M. (2021). FreaAI: Automated extraction of data slices to test machine learning models. https://arxiv.org/abs/2108.05620.
3. Bellamy, R. K. E., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., Lohia, P., Martino, J., Mehta, S., Mojsilovic, A., Nagar, S., Ramamurthy, K. N., Richards, J., Saha, D., Sattigeri, P., Singh, M., Varshney, K. R., & Zhang, Y. (2018). AI fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. https://doi.org/10.48550/arXiv.1810.01943.
4. Agarwal, S. (2021). Trade-offs between fairness and privacy in machine learning. In IJCAI 2021 workshop on AI for social good.
5. Arnold, M., et al. (2019). FactSheets: Increasing trust in AI services through supplier's declarations of conformity. IBM Journal of Research & Development.