Abstract
Machine learning algorithms have begun to enter clinical settings traditionally resistant to digitalisation, such as psychiatry. This raises questions about how algorithms will be incorporated into professionals’ practices, and with what implications for care provision. This paper addresses such questions by examining the pilot of an algorithm for the prediction of inpatient violence in two acute psychiatric clinics in the Netherlands. Violence is a prominent risk in acute psychiatry, and professional sensemaking, corrective measures (such as patient isolation and sedation), and quantification instruments (such as the Brøset Violence Checklist, henceforth BVC) have previously been developed to deal with it. We juxtapose the different ways in which psychiatric nurses, the BVC, and algorithmic scores navigate assessments of the potential for future inpatient violence. We find that nurses approach violence assessment with an attitude of doubt and precaution: they aim to understand warning signs and probe alternative explanations for them, so as not to punish patients unnecessarily. Being in charge of quantitative capture, they incorporate this attitude of doubt into the BVC scores. Conversely, the algorithmic risk scores import a logic of pre-emption into the clinic: they attempt to flag targets before warning signs manifest and are noticed by nurses. Pre-emption translates into punitive attitudes towards patients, to which nurses refuse to subscribe. During the pilot, nurses engage with algorithmic scores solely by attempting to reinstate doubt in them. We argue that pre-emption can hardly be incorporated into professional decision-making without importing punitive attitudes. As such, algorithmic outputs targeting ethically laden instances of decision-making are a cause for academic and political concern.
Publisher
Springer Science and Business Media LLC