Abstract
Background
Social media platforms are widely used by people with mental illnesses to cope with their conditions. One way of coping is navigating online communities where people can receive emotional support and informational advice. Benefits have been documented in terms of impact on health outcomes. However, the pitfalls are still unknown, as not all content is necessarily helpful or correct. Furthermore, the COVID-19 pandemic and its related problems, such as worsening mental health symptoms, the dissemination of conspiracy narratives, and medical distrust, may have affected these online communities. The situation in Italy is of particular interest, as Italy was the first Western country to experience a nationwide lockdown. Particularly during this challenging time, the beneficial role of community moderators with professional mental health expertise needs to be investigated with respect to uncovering misleading information and regulating communities.
Objective
The aim of the proposed study is to investigate potentially harmful content in Italian-language online communities for mental health symptoms. Besides providing descriptive information about the content addressed in posts and comments, this study aims to analyze the content from two viewpoints. The first compares expert-led and peer-led communities, focusing on differences in misinformation. The second unravels the impact of the COVID-19 pandemic, not merely by investigating differences in topics but also by examining the needs expressed by community members.
Methods
A codebook for the content analysis of Facebook communities has been developed, and a content analysis will be conducted on bundles of posts. Among 14 Facebook groups that expressed interest in participating in this study, two groups were selected for analysis: one moderated by a health professional (n=12,058 members) and one led by peers (n=5598 members). Utterances from 3 consecutive calendar years (2019-2021) will be studied by comparing the months before the pandemic, during the height of the pandemic, and during the postpandemic phase. This method permits the identification of different types of misinformation and the contexts in which they emerge. Ethical approval was obtained from the Università della Svizzera italiana ethics committee.
Results
The usability of the codebook was demonstrated in a pretest. Subsequently, 144 threads (1534 utterances) were coded by two coders. Intercoder reliability was calculated on 293 units (19.10% of the total sample; Krippendorff α=.94, range .72-1). Aside from a few analyses comparing bundles, individual utterances will constitute the unit of analysis.
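As an illustration of the reliability check reported above, the following minimal sketch computes Krippendorff alpha for nominal codes using the Python krippendorff package; the coder-by-unit matrix is hypothetical toy data, not the study's coding data, and this is not the authors' analysis script.

import numpy as np
import krippendorff  # pip install krippendorff

# Rows = coders, columns = coded units; np.nan marks a unit a coder did not rate.
# These values are illustrative only.
reliability_data = np.array([
    [1, 2, 2, 1, 3, np.nan, 2],
    [1, 2, 2, 1, 3, 1,      2],
], dtype=float)

# Nominal level of measurement matches categorical content codes.
alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff alpha: {alpha:.2f}")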
Conclusions
This content analysis will identify deleterious content found in online mental health support groups, the potential role of moderators in uncovering misleading information, and the impact of COVID-19 on the content.
International Registered Report Identifier (IRRID)
PRR1-10.2196/35347