Abstract
Background
Throughout the COVID-19 pandemic, there has been a concern that social media may contribute to vaccine hesitancy due to the wide availability of antivaccine content on social media platforms. YouTube has stated its commitment to removing content that contains misinformation on vaccination. Nevertheless, such claims are difficult to audit. There is a need for more empirical research to evaluate the actual prevalence of antivaccine sentiment on the internet.
Objective
This study examines recommendations made by YouTube’s algorithms to investigate whether the platform may facilitate the spread of antivaccine sentiment on the internet. We assess the prevalence of antivaccine sentiment in recommended videos and evaluate how real-world users’ experiences differ from the personalized recommendations obtained through synthetic data collection methods, which are often used to study YouTube’s recommendation system.
Methods
We trace trajectories from a credible seed video posted by the World Health Organization to antivaccine videos, following only video links suggested by YouTube’s recommendation system. First, we gamify the process by asking real-world participants to intentionally find an antivaccine video with as few clicks as possible. Having collected crowdsourced trajectory data from respondents from (1) the World Health Organization and United Nations system (n=33) and (2) Amazon Mechanical Turk (n=80), we next compare the recommendations seen by these users to recommended videos that are obtained from (3) the YouTube application programming interface’s RelatedToVideoID parameter (n=40) and (4) from clean browsers without any identifying cookies (n=40), which serve as reference points. We develop machine learning methods to classify antivaccine content at scale, enabling us to automatically evaluate 27,074 video recommendations made by YouTube.
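The trajectory-tracing step described above can be sketched in code. The sketch below is illustrative only: it assumes the `relatedToVideoId` parameter of the YouTube Data API v3 `search.list` endpoint (since deprecated by Google), and the `fetch_related` callable stands in for whatever network layer performs the actual requests; it is not the authors’ collection pipeline.

```python
from typing import Callable

# Endpoint for the YouTube Data API v3 search.list method
API_URL = "https://www.googleapis.com/youtube/v3/search"

def related_params(video_id: str, api_key: str, max_results: int = 10) -> dict:
    """Query parameters for a search.list call keyed on relatedToVideoId."""
    return {
        "part": "snippet",
        "type": "video",
        "relatedToVideoId": video_id,
        "maxResults": max_results,
        "key": api_key,
    }

def trace_trajectory(seed_id: str,
                     fetch_related: Callable[[str], list],
                     steps: int = 5) -> list:
    """Follow the first recommended link for up to `steps` hops.

    `fetch_related` maps a video ID to the list of recommended video IDs,
    e.g., by sending related_params() to API_URL and parsing the response.
    """
    trajectory = [seed_id]
    for _ in range(steps):
        recs = fetch_related(trajectory[-1])
        if not recs:  # no further recommendations available
            break
        trajectory.append(recs[0])
    return trajectory
```

Separating the request construction from the fetching callable makes the traversal logic testable without network access, and mirrors the paper’s design of comparing the same traversal under different recommendation sources (API, clean browser, real users).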
Results
We found no evidence that YouTube promotes antivaccine content; the average share of antivaccine videos remained well below 6% at all steps in users’ recommendation trajectories. However, users’ watch histories significantly affected the videos recommended to them, suggesting that data from the application programming interface or from a clean browser do not offer an accurate picture of the recommendations that real users see. Real users saw slightly more provaccine content as they advanced through their recommendation trajectories, whereas synthetic users were drawn toward irrelevant recommendations. Rather than antivaccine content, the videos YouTube recommended were likely to contain health-related content not specifically related to vaccination; these videos were typically longer and featured more popular content.
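The at-scale labeling behind these results (27,074 recommendations, per the Methods) could be approximated by a simple text classifier over video metadata. The sketch below trains a minimal bag-of-words Naive Bayes model on hypothetical video titles; the authors’ actual classifier, features, and training data are not described in this abstract and may differ substantially.

```python
# Minimal bag-of-words Naive Bayes labeler for video metadata.
# Labels: 1 = antivaccine, 0 = other. Training data is hypothetical.
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs with labels in {0, 1}."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter()
    for text, label in docs:
        priors[label] += 1
        counts[label].update(text.lower().split())
    vocab = set(counts[0]) | set(counts[1])
    return counts, priors, vocab

def predict(model, text):
    """Return the label with the higher log-posterior for `text`."""
    counts, priors, vocab = model
    scores = {}
    for label in (0, 1):
        total = sum(counts[label].values())
        score = math.log(priors[label] / sum(priors.values()))
        for word in text.lower().split():
            # Laplace smoothing over the shared vocabulary
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)
```

For example, a model trained on a few antivaccine and neutral titles can then label unseen titles one at a time, which is the kind of step that, repeated over every recommended video, yields the per-step antivaccine shares reported above.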
Conclusions
Our findings suggest that the common perception that YouTube’s recommendation system acts as a “rabbit hole” may be inaccurate and that YouTube may instead be following a “blockbuster” strategy that attempts to engage users by promoting other content that has been reliably successful across the platform.