Abstract

Background
Generative artificial intelligence (GAI) tools, such as large language models, have the potential to revolutionize medical research and writing, but their use also raises important ethical and practical concerns. This study examines the prevalence and content of GAI guidelines among Korean medical journals to assess the current landscape and inform future policy development.

Methods
The top 100 Korean medical journals by H-index were surveyed. Author guidelines were collected and screened by a human author and an AI chatbot to identify GAI-related content. Key components of GAI policies were extracted and compared across journals. Journal characteristics associated with GAI guideline adoption were also analyzed.

Results
Only 18% of the surveyed journals had GAI guidelines, a rate much lower than those previously reported for international journals. However, adoption increased over time, reaching 57.1% in the first quarter of 2024. Higher-impact journals were more likely to have GAI guidelines. All journals with GAI guidelines required authors to declare GAI use, and 94.4% prohibited AI authorship. Key policy components included emphasizing human responsibility (72.2%), discouraging AI-generated content (44.4%), and exempting basic AI tools (38.9%).

Conclusion
While GAI guideline adoption among Korean medical journals is lower than global trends, there is a clear increase in implementation over time. The key components of these guidelines align with international standards, but greater standardization and collaboration are needed to ensure responsible and ethical use of GAI in medical research and writing.
Publisher
Cold Spring Harbor Laboratory