Abstract
Online mental health spaces require effective content moderation for safety. Whilst policies acknowledge the need for proactive practices and moderator support, the expectations and experiences of internet users engaging with self-harm and suicide content online remain unclear. This study therefore aimed to explore participant accounts of moderation, moderators and moderating when engaging online with self-harm/suicide (SH/S) related content.

Participants in the DELVE study were interviewed about their experiences with SH/S content online. N=14 participants were recruited to interview at baseline, with n=8 completing the 3-month follow-up and n=7 the 6-month follow-up. Participants were also asked to complete daily diaries of their online use between interviews. Thematic analysis, with deductive coding informed by interview questions, was used to explore perspectives on moderation, moderators and moderating from interview transcripts and diary entries.

Three key themes were identified: ‘content reporting behaviour’, exploring factors influencing decisions to report SH/S content; ‘perceptions of having content blocked’, exploring participant experiences and speculative accounts of SH/S content moderation; and ‘content moderation and moderators’, examining participant views on moderation approaches, their own experiences of moderating, and insights for future moderation improvements.

This study revealed challenges in moderating SH/S content online and highlighted inadequacies in current procedures. Participants struggled to self-moderate online SH/S spaces, showing the need for proactive platform-level strategies. Additionally, whilst the lived experience of moderators was valued by participants, the associated risks emphasised the need for supportive measures. Policymakers and industry leaders should prioritise transparent and consistent moderation practice.

Author Summary

In today’s digital world, ensuring the safety of online mental health spaces is vital. Yet, there’s still a lot we don’t understand about how people experience moderation, moderators, and moderating in self-harm and suicide online spaces. Our study set out to change that by talking to 14 individuals who engage with this content online. Through interviews and diaries, we learned more about their experiences with platform and online community moderation.

Our findings highlighted three important points. Firstly, individuals with declining mental health struggled to use tools that might keep them safe, like reporting content. This emphasised the need for effective moderation in online mental health spaces, to prevent harm. Secondly, unclear communication and inconsistent moderation practices led to confusion and frustration amongst users who reported content, or had their own content moderated. Improving transparency and consistency will enhance user experiences of moderation online. Lastly, users encouraged the involvement of mental health professionals in online moderating teams, suggesting platforms and online communities should provide their moderation staff with training and supervision from professionals. These findings support our recommendations for ongoing changes to moderation procedures across online platforms.
Publisher
Cold Spring Harbor Laboratory