1. Representation Learning with Statistical Independence to Mitigate Bias
2. Haswanth Aekula, Sugam Garg, and Animesh Gupta. 2021. [RE] Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation. CoRR abs/2104.06973 (2021). arXiv:2104.06973 https://arxiv.org/abs/2104.06973
3. Sharat Agarwal, Sumanyu Muku, Saket Anand, and Chetan Arora. 2022. Does Data Repair Lead to Fair Models? Curating Contextually Fair Data To Reduce Model Bias. In 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE.
4. Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure
5. Carolyn Ashurst, Emmie Hine, Paul Sedille, and Alexis Carlier. 2022. AI Ethics Statements: Analysis and Lessons Learnt from NeurIPS Broader Impact Statements. In 2022 ACM Conference on Fairness, Accountability, and Transparency. 2047–2056.