Affiliation:
1. Nanyang Technological University, Singapore
2. Singapore Management University, Singapore
3. IHPC and CFAR, Agency for Science, Technology and Research, Singapore
4. Beihang University, China
5. University of Alberta, Canada and The University of Tokyo, Japan
6. Zhejiang Sci-Tech University, China, and Nanyang Technological University, Singapore
Abstract
Deep Neural Networks (DNNs) have achieved tremendous success in many applications, yet they have been shown to exhibit undesirable behaviors with respect to robustness, privacy, and other trustworthiness concerns. Among these, fairness (i.e., non-discrimination) is an important property, especially when DNNs are applied in sensitive domains (e.g., finance and employment). DNNs easily learn spurious correlations between protected attributes (e.g., age, gender, race) and the classification task, and they can develop discriminatory behaviors if the training data is imbalanced. Such discriminatory decisions in sensitive applications can have severe social impact. To expose potential discrimination in DNNs before they are put into use, several testing techniques have been proposed to identify discriminatory instances (i.e., instances that exhibit the defined discrimination). However, repairing DNNs after such discrimination is detected remains challenging. Existing techniques mainly rely on retraining with a large number of discriminatory instances generated by testing methods, which incurs a large time overhead and makes the repair inefficient.
In this work, we propose Faire to effectively and efficiently repair the fairness issues of DNNs without using additional data (e.g., discriminatory instances). Our basic idea is inspired by traditional program repair, where a typical method is to localize the defect and fix the program logic by adding condition checking. Similarly, for DNNs, we try to understand the unfair logic and reformulate it with well-designed condition checking. In this article, we synthesize a condition that reduces the effect of features relevant to the protected attributes in the DNN. Specifically, we first perform a neuron-based analysis of neuron functionality to identify neurons whose outputs can be regarded as features relevant to the protected attributes or to the original task. Then a new condition layer is added after each hidden layer to penalize neurons accountable for the protected features (i.e., intermediate features relevant to protected attributes) and promote neurons accountable for the non-protected features (i.e., intermediate features relevant to the original task). In sum, the repair rate of Faire reaches more than 99%, outperforming other methods, and the whole repair process takes no more than 340 seconds. The evaluation results demonstrate that our approach can effectively and efficiently repair the individual discriminatory instances of the target model.
Funder
National Research Foundation, Singapore
Cyber Security Agency under its National Cybersecurity R&D Programme
Ministry of Education, Singapore
Academic Research Tier 3
A*STAR Centre for Frontier AI Research
DSO National Laboratories
AI Singapore Programme
Natural Sciences and Engineering Research Council of Canada
Publisher
Association for Computing Machinery (ACM)
Cited by
1 article.
1. RUNNER: Responsible UNfair NEuron Repair for Enhancing Deep Neural Network Fairness;Proceedings of the 46th IEEE/ACM International Conference on Software Engineering;2024-02-06