Authors:
Ahmed Abdeen Hamed, Xindong Wu
Abstract
Generative AI tools exemplified by ChatGPT are becoming a new reality. This study is motivated by the premise that "AI-generated content may exhibit a distinctive behavior that can be separated from scientific articles". In this study, we show how articles can be generated by means of prompt engineering for various diseases and conditions. We then show how we tested this premise in two phases and proved its validity. Subsequently, we introduce xFakeSci, a novel learning algorithm that is capable of distinguishing ChatGPT-generated articles from publications produced by scientists. The algorithm is trained using network models derived from both sources. To mitigate overfitting, we incorporated a calibration step built upon data-driven heuristics, including proximity and ratios. Specifically, from a total of 3952 fake articles covering three different medical conditions, the algorithm was trained using only 100 articles but calibrated using folds of 100 articles. The classification step was performed using 300 articles per condition, and the labeling step was carried out against an equal mix of 50 generated articles and 50 authentic PubMed abstracts. The testing spanned publication periods from 2010 to 2024 and encompassed research on three distinct diseases: cancer, depression, and Alzheimer's. Further, we evaluated the accuracy of the xFakeSci algorithm against classical data-mining algorithms (e.g., Support Vector Machines, Regression, and Naive Bayes). The xFakeSci algorithm achieved F1 scores ranging from 80% to 94%, outperforming the common data-mining algorithms, which scored F1 values between 38% and 52%. We attribute this noticeable difference to the introduction of calibration and a proximity distance heuristic, which underpin the promising performance. Indeed, the prediction of fake science generated by ChatGPT presents a considerable challenge. Nonetheless, the introduction of the xFakeSci algorithm is a significant step toward combating fake science.
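As a rough illustration of the baseline comparison described in the abstract, the following minimal Python sketch (using scikit-learn) labels a balanced mix of generated and authentic abstracts with classical classifiers and reports F1 scores. The function name, the TF-IDF features, and the data-handling choices are assumptions made here for illustration only; they are not the published xFakeSci method, which builds network models from the two sources rather than bag-of-words vectors.

# Hypothetical sketch of the baseline evaluation described in the abstract:
# a balanced set of ChatGPT-generated and PubMed abstracts is labeled by
# classical classifiers (SVM, logistic regression, Naive Bayes) and scored
# with F1. All names and feature choices below are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score


def evaluate_baselines(generated_abstracts, pubmed_abstracts):
    """Score classical baselines on a balanced fake/real abstract mix."""
    texts = generated_abstracts + pubmed_abstracts
    labels = [1] * len(generated_abstracts) + [0] * len(pubmed_abstracts)

    # Simple unigram/bigram TF-IDF features; the actual study uses
    # word-network models instead of vector-space features.
    X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(texts)
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.5, stratify=labels, random_state=0
    )

    baselines = {
        "SVM": LinearSVC(),
        "LogisticRegression": LogisticRegression(max_iter=1000),
        "NaiveBayes": MultinomialNB(),
    }
    scores = {}
    for name, model in baselines.items():
        model.fit(X_train, y_train)
        scores[name] = f1_score(y_test, model.predict(X_test))
    return scores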
Funder
European Union’s Horizon 2020 research and innovation programme
Ministerstwo Edukacji i Nauki
National Natural Science Foundation of China
Publisher
Springer Science and Business Media LLC