Gender bias in machine learning for sentiment analysis

Author:

Mike Thelwall

Abstract

Purpose – This paper investigates whether machine learning induces gender bias in sentiment analysis, in the sense of producing results that are more accurate for male authors or for female authors. It also investigates whether training separate male and female variants could improve the accuracy of machine learning for sentiment analysis.

Design/methodology/approach – The paper uses three ratings-balanced sets of reviews of restaurants and hotels to train algorithms with and without gender selection.

Findings – Accuracy is higher on female-authored reviews than on male-authored reviews for all data sets, so applications of sentiment analysis using mixed-gender data sets will over-represent the opinions of women. Training on same-gender data improves performance less than having additional data from both genders.

Practical implications – End users of sentiment analysis should be aware that its small gender biases can affect the conclusions drawn from it, and should apply correction factors when necessary. Users of systems that incorporate sentiment analysis should be aware that performance will vary by author gender. Developers do not need to create gender-specific algorithms unless they have more training data than their system can cope with.

Originality/value – This is the first demonstration of gender bias in machine learning sentiment analysis.
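The core measurement in the study is accuracy disaggregated by author gender. A minimal sketch of that idea follows, assuming a toy lexicon classifier and invented example reviews — this is not the paper's algorithm or data, only an illustration of evaluating one classifier separately on female- and male-authored subsets:

```python
# Illustrative sketch (hypothetical classifier and data): compute a sentiment
# classifier's accuracy separately for female- and male-authored reviews.

POSITIVE = {"great", "lovely", "excellent", "delicious"}
NEGATIVE = {"awful", "cold", "rude", "bland"}

def predict_sentiment(text):
    """Toy lexicon classifier: 'pos' unless negative words outnumber positive ones."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "pos" if score >= 0 else "neg"

# Each review: (text, true label, author gender) -- invented toy data.
reviews = [
    ("great food and lovely staff", "pos", "f"),
    ("delicious starters and excellent service", "pos", "f"),
    ("the soup was cold and the waiter rude", "neg", "f"),
    ("awful room and bland breakfast", "neg", "m"),
    ("excellent view", "pos", "m"),
    ("rude reception", "neg", "m"),
    ("not great at all", "neg", "m"),  # negation fools the toy classifier
]

def accuracy_by_gender(reviews, gender):
    """Accuracy of the classifier restricted to one gender's reviews."""
    subset = [(text, label) for text, label, g in reviews if g == gender]
    correct = sum(predict_sentiment(text) == label for text, label in subset)
    return correct / len(subset)

for g in ("f", "m"):
    print(f"{g}: {accuracy_by_gender(reviews, g):.2f}")
```

Here the female subset happens to score higher than the male subset, mirroring the direction of the gap the paper reports; any real study would of course need ratings-balanced data and a trained classifier rather than a fixed lexicon.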

Publisher

Emerald

Subject

Library and Information Sciences, Computer Science Applications, Information Systems


Cited by 15 articles.
