Abstract
Introduction
For artificial intelligence (AI) to help improve mental health care, the design of data-driven technologies needs to be fair, safe and inclusive. Participatory design can play a critical role in empowering marginalised communities to take an active role in constructing research agendas and outputs. Given the unmet needs of the LGBTQI+ community in mental health care, there is a pressing need for participatory research to include a range of diverse queer perspectives on issues of data collection and use (in routine clinical care as well as for research) and on AI design. Here we propose a protocol for a Delphi consensus process for the development of PARticipatory Queer AI Research for Mental Health (PARQAIR-MH) practices, aimed at informing digital health practices and policy.

Methods and Analysis
The development of PARQAIR-MH comprises four stages. In Stage 1, a review of recent literature and a fact-finding consultation with stakeholder organisations will be conducted to define the terms of reference for Stage 2, the Delphi process. The Delphi process consists of three rounds: the first two rounds will iteratively identify items to be included in the final Delphi survey for consensus ratings. Stage 3 consists of consensus meetings to review and aggregate the Delphi survey responses, leading to Stage 4, in which we will produce a reusable toolkit to facilitate participatory development of future bespoke LGBTQI+-adapted data collection, harmonisation and use for data-driven AI applications specifically in mental health care settings.

Ethics and Dissemination
PARQAIR-MH aims to deliver a toolkit that will help ensure that the specific needs of LGBTQI+ communities are accounted for in mental health applications of data-driven technologies. Participants in the Delphi process will be recruited by snowball and opportunistic sampling via professional networks and social media (but not by direct approach to healthcare service users, patients, specific clinical services, or via clinicians' caseloads). Participants will not be required to share personal narratives or experiences of healthcare or treatment for any condition. Before agreeing to participate, people will be given information about the issues considered in scope for the Delphi (e.g. developing best practices and methods for collecting and harmonising sensitive-characteristics data; developing guidelines for data use and re-use), alongside specific risks of unintended harm from participating that can reasonably be anticipated. Outputs from Stage 4 will be made available in open-access peer-reviewed publications, blogs, social media and a dedicated project website for future re-use.

Ethical Approval
The Institute of Population Health Research Ethics Committee of the University of Liverpool gave ethical approval for this work (REC Reference: 12413; 24 July 2023).

Strengths and Limitations
The proposed Delphi study will deliver a toolkit that assists researchers, healthcare organisations and policy makers in deciding how to appropriately collect and use data on sensitive characteristics (e.g. sexual orientation and gender identity), including stakeholder-defined re-use of these data for specific purposes such as health service improvement and the development of tools for data-driven decision support (i.e. data science, AI and ML applications designed for LGBTQI+ communities).
This Delphi study focuses on the intersection of sensitive characteristics and mental health, whereas similar research has focused on healthcare or sexual health more generally1.
The Delphi study will be led by a team from the United Kingdom, with the expectation that the consensus process will involve participants largely drawn from Western cultures with similar societal attitudes and legislative mechanisms to protect the human rights of LGBTQI+ people. This will limit the transportability and generalisability of the Delphi process and consensus outputs.
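As a purely illustrative sketch of how the Stage 3 consensus ratings might be aggregated: the protocol above does not fix a consensus criterion, so the 9-point rating scale, the 75% agreement threshold and the example item names in the snippet below are assumptions made for illustration only, not part of PARQAIR-MH.

```python
# Illustrative only: aggregating hypothetical Delphi survey ratings.
# The 9-point scale, the 75% agreement threshold and the example items
# are assumptions, not specifications from the PARQAIR-MH protocol.
from statistics import median

# Each item maps to panellists' ratings (1 = strongly disagree ... 9 = strongly agree).
ratings = {
    "collect_sexual_orientation_data": [8, 9, 7, 8, 9, 6, 8],
    "reuse_data_for_service_improvement": [5, 7, 9, 4, 8, 6, 7],
}

AGREEMENT_THRESHOLD = 0.75  # assumed: >= 75% of ratings in the 7-9 band counts as consensus

for item, scores in ratings.items():
    in_band = sum(1 for s in scores if s >= 7)
    agreement = in_band / len(scores)
    status = "consensus" if agreement >= AGREEMENT_THRESHOLD else "no consensus (re-rate next round)"
    print(f"{item}: median={median(scores)}, agreement={agreement:.0%} -> {status}")
```

In practice, the consensus meetings would decide which items are retained, revised or dropped; the snippet only shows one common way of summarising rating distributions between Delphi rounds.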
Publisher
Cold Spring Harbor Laboratory
References (54 articles)
1. Patient and general public attitudes towards clinical artificial intelligence: a mixed methods systematic review. The Lancet Digital Health (2021).
2. Foley, J. & Woollard, J. Digital Future of Mental Healthcare Report. https://topol.hee.nhs.uk/wp-content/uploads/HEE-Topol-Review-Mental-health-paper.pdf (2019).
3. Chen, R. J. et al. Algorithm fairness in AI for medicine and healthcare. arXiv preprint arXiv:2110.00603 DOI: https://doi.org/10.48550/arXiv.2110.00603 (2021).
4. Obermeyer, Z. et al. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 447–453 (2019).
5. Race-Free Equations for eGFR: Comparing Effects on CKD Classification