BACKGROUND
The question of whether health information is true or false has gained new momentum since the advent of the Internet. Undoubtedly, more health information is available, as it can be retrieved quickly and at little expense. Because parties with vested interests may post information filtered through the lens of those interests, because people without formal qualifications are free to post, and because identifying a trustworthy source of information is beyond many users, it is very likely that the web contains misinformation and that the total amount of misinformation has increased.
OBJECTIVE
This study aims to contribute to knowledge of when, how, and possibly why erroneous information enters the mental health discussions of online communities for mental health (OCMHs). Because the discussions (original posts, comments, replies to those comments, and so on) remain publicly available to group participants over time, misinformation persists on the site and can spread for as long as it is accessible. This study therefore aims to identify contexts that facilitate the accumulation of misinformation or make it easier to come across.
METHODS
We tested whether differentiation along sociodemographic and communication variables formed argumentative contexts (or niches) in which interested persons were more likely to come across misinformation while looking for correct information; the other side of this differentiation would be qualities that impede the spread of erroneous information. The research was a content analysis of 1534 comments (on 144 initial posts) from two Italian-speaking Facebook groups. Two indicators were computed: the prevalence of erroneous content and the failure to attend to errors.
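As an illustration of how two such indicators can be derived from coded data, the following minimal Python sketch computes a prevalence rate and a rate of uncorrected misinformation from a list of coded comments. The field names (is_misinformation, was_corrected) are hypothetical and are not taken from the study's actual coding scheme.

```python
# Minimal sketch (illustrative only): computing the two indicators from coded comments.
# Field names are hypothetical, not the study's coding scheme.

def compute_indicators(comments):
    """comments: list of dicts with boolean keys 'is_misinformation' and 'was_corrected'."""
    total = len(comments)
    misinforming = [c for c in comments if c["is_misinformation"]]
    uncorrected = [c for c in misinforming if not c["was_corrected"]]

    # Prevalence of erroneous content: misinforming comments over all coded comments.
    prevalence = len(misinforming) / total if total else 0.0
    # Failure to attend to errors: misinforming comments left uncorrected.
    failure_to_attend = len(uncorrected) / len(misinforming) if misinforming else 0.0
    return prevalence, failure_to_attend

# Toy data only (not the study's data):
sample = [
    {"is_misinformation": True, "was_corrected": False},
    {"is_misinformation": True, "was_corrected": True},
    {"is_misinformation": False, "was_corrected": False},
]
print(compute_indicators(sample))  # (0.666..., 0.5)
```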
RESULTS
The study found that about one-third of the comments (32.0%; 407/1534) contained medically inaccurate information or challenged medical expertise. However, only about one-fifth (20.6%; 84/407) of the comments containing at least one misinforming statement were corrected or discussed. We looked for variables that could be understood as markers of erroneous information. That hope was not often fulfilled; in many cases, the supposedly differentiating variable did not differentiate much. However, some differences emerged. Competence-oriented moderation of OCMHs appears to be a stronger impediment to erroneous information than empathy-oriented moderation. Patients searching for psychological causes of physical symptoms behaved differently from others in many respects, including ways that may support misinformation. Associations between communication variables and misinformation were also found in posts about suicide and anxiety disorders, whereas there seemed to be no such association for posts about depression or COVID-19. Treatment was the stage of the illness trajectory discussed most often, with some propensity to attract erroneous information.
CONCLUSIONS
A crucial question is: do the error markers and misinformation niches identified here bear on the demands for more patient empowerment in online mental health communities? Such demands are often invoked without any guarantee that an empowered person would be capable of taking on the role of a newly autonomous co-determinant of their own health. Furthermore, the results underscore the need for experts in these communities.
CLINICALTRIAL
International Registered Report Identifier (IRRID): PRR1-10.2196/35347