[1] Ishaan Arora, Julia Guo, Sarah Ita Levitan, Susan McGregor, and Julia Hirschberg. 2020. A Novel Methodology for Developing Automatic Harassment Classifiers for Twitter. Association for Computational Linguistics (ACL), 7–15. https://doi.org/10.18653/v1/2020.alw-1.2
[2] Thomas Davidson, Debasmita Bhattacharya, and Ingmar Weber. 2019. Racial Bias in Hate Speech and Abusive Language Detection Datasets. 25–35. https://doi.org/10.18653/v1/w19-3504
[3] Handling Bias in Toxic Speech Detection: A Survey
[4] Deepak Kumar, Patrick Gage Kelley, Sunny Consolvo, Joshua Mason, Elie Bursztein, Zakir Durumeric, Kurt Thomas, and Michael Bailey. 2021. Designing toxic content classification for a diversity of perspectives. In Proceedings of the 17th Symposium on Usable Privacy and Security, SOUPS 2021. 299–317. https://data.esrg.stanford.edu/study/toxicity-perspectives
[5] Misogynoir