BACKGROUND
Artificial intelligence (AI) holds tremendous potential for healthcare, as has been demonstrated across various use cases ranging from automated triage to assisted diagnosis. However, the limitations of AI must also be carefully considered in a fact-based debate on optimal use scenarios. In light of the prominent discussion around trust issues with AI, it is important to assess what physicians think about the technology, and how they discuss it, in order to avoid general resistance to it among medical practitioners.
OBJECTIVE
The aim of the present study was to identify key themes in medical professionals’ discussions of AI and to examine these themes for the perceptions of AI they reveal.
METHODS
Using a computational grounded theory approach, 181 Reddit threads in the medical subreddits r/medicine, r/radiology, r/surgery, and r/psychiatry were analyzed to identify key themes. We combined a quantitative, unsupervised machine learning approach for detecting thematic clusters with a qualitative data analysis to gain a deeper understanding of medical professionals’ perceptions of AI.
RESULTS
Three relevant key themes – (1) the perceived consequences of AI, (2) perceptions of the physician–AI relationship, and (3) a proposed way forward – emerged from the Reddit analysis. The first and second themes, in particular, appeared in posts that were partially biased toward physicians’ fear of being replaced, skepticism of AI, and fear that patients may not accept AI. The third theme, by contrast, consists of factual discussions of how both AI and medicine must develop further to enable broad adoption of AI and fruitful outcomes for healthcare.
CONCLUSIONS
Many physicians aim to derive the greatest value from AI for their patients and thus engage in constructive criticism of the technology. At the same time, a concerningly large number of physicians demonstrate perceptions that appear to be at least partially biased and that could hinder both successful use-case implementation and societal acceptance of AI in the future. Such biased perceptions therefore need to be monitored and – where possible – countered.