Affiliation:
1. University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
2. University of Colorado Boulder, Boulder, CO, USA
Abstract
Volunteer moderators serve as gatekeepers for problematic content, such as racism and other forms of hate speech, on digital platforms. Prior studies have reported volunteer moderators' diverse roles in different governance models, highlighting the tensions between moderators and other stakeholders (e.g., administrative teams and users). Building upon prior research, this paper focuses on how volunteer moderators moderate racist content and how a platform's governance influences these practices. To understand how moderators deal with racist content, we conducted in-depth interviews with 13 moderators from city subreddits on Reddit. We found that moderators heavily relied on AutoMod to regulate racist content and racist user accounts. However, content crafted through covert racism and "color-blind" racial frames was not addressed well. We attributed these challenges in moderating racist content to (1) moderators' concerns about power corruption, (2) arbitrary moderator team structures, and (3) evolving forms of covert racism. Our results demonstrate that decentralized governance on Reddit could not support local efforts to regulate color-blind racism. Finally, we discuss conceptual and practical ways to disrupt color-blind moderation.
Funder
National Science Foundation
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Human-Computer Interaction, Social Sciences (miscellaneous)
Cited by
2 articles.