Affiliations:
1. Northwestern University Feinberg School of Medicine, USA
2. National Institute of Environmental Health Sciences, USA
3. National Institutes of Health, USA
Abstract
Journals and publishers are increasingly using artificial intelligence (AI) to screen submissions for potential misconduct, including plagiarism and data or image manipulation. While using AI can enhance the integrity of published manuscripts, it can also increase the risk of false/unsubstantiated allegations. Ambiguities related to journals’ and publishers’ responsibilities concerning fairness and transparency also raise ethical concerns. In this Topic Piece, we offer the following guidance: (1) All cases of suspected misconduct identified by AI tools should be carefully reviewed by humans to verify accuracy and ensure accountability; (2) Journals/publishers that use AI tools to detect misconduct should use only well-tested and reliable tools, remain vigilant concerning forms of misconduct that cannot be detected by these tools, and stay abreast of advancements in technology; (3) Journals/publishers should inform authors about irregularities identified by AI tools and give them a chance to respond before forwarding allegations to their institutions in accordance with Committee on Publication Ethics guidelines; (4) Journals/publishers that use AI tools to detect misconduct should screen all relevant submissions and not just random/purposefully selected submissions; and (5) Journals should inform authors about their definition of misconduct, their use of AI tools to detect misconduct, and their policies and procedures for responding to suspected cases of misconduct.
Funder
National Center for Advancing Translational Sciences
Cited by
3 articles