Author:
Georgios Ioannides, Aishwarya Jadhav, Aditi Sharma, Samarth Navali, Alan W. Black
Abstract
This work presents a comprehensive approach to reducing bias in word embedding vectors and evaluating the impact on various Natural Language Processing (NLP) tasks. Two GloVe variants (840B and 50) are debiased by identifying the gender direction in the word embedding space and then removing or reducing the gender component from the embeddings of target words, while preserving useful semantic information. Their gender bias is assessed with the Word Embedding Association Test (WEAT). The performance of co-reference resolution and text classification models trained on both the original and debiased embeddings is evaluated in terms of accuracy. A compressed co-reference resolution model is also examined to gauge the effectiveness of debiasing techniques on resource-efficient models. To the best of the authors' knowledge, this is the first attempt to apply compression techniques to debiased models. By analyzing the context preservation of debiased embeddings using a Twitter misinformation dataset, this study contributes valuable insights into the practical implications of debiasing methods for real-world applications such as person profiling.
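The core debiasing step described above (removing the gender component from target word vectors) can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's exact procedure: it derives a gender direction from a single he/she difference vector and projects it out of a word embedding; the variable names and word pairs are hypothetical.

```python
import numpy as np

def gender_direction(embeddings, male_word="he", female_word="she"):
    """Estimate a gender direction from one illustrative word pair.
    (The paper may use several pairs and/or PCA; this is a simplification.)"""
    d = embeddings[male_word] - embeddings[female_word]
    return d / np.linalg.norm(d)

def debias(vector, direction):
    """Remove the component of `vector` that lies along the unit `direction`,
    keeping the remainder of the semantic content unchanged."""
    return vector - np.dot(vector, direction) * direction

# Toy usage with random vectors standing in for pretrained GloVe embeddings.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["he", "she", "doctor"]}
g = gender_direction(emb)
emb["doctor"] = debias(emb["doctor"], g)
print(np.dot(emb["doctor"], g))  # ~0: gender component removed
```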
Publisher
Springer Science and Business Media LLC