BACKGROUND
ChatGPT is becoming a new reality. Where do we go from here?
OBJECTIVE
The objective of this study is to demonstrate how ChatGPT-generated publications can be distinguished from counterparts produced by biomedical scientists.
METHODS
Using a new algorithm, called xFakeBibs, we demonstrate a significant difference between ChatGPT-generated fake publications and real publications. Specifically, we prompted ChatGPT to generate 100 publications related to Alzheimer’s disease and comorbidity. Using the TF-IDF measure against a dataset of real publications, we constructed a network training model from the bigrams extracted from 100 publications. From 10 folds of 100 publications each, we built 10 calibrating networks to derive lower and upper bounds for classifying an article as real or fake. In its final step, the algorithm tests each ChatGPT-generated article against these bounds and predicts its class, assigning the POSITIVE label to real articles and the NEGATIVE label to fake ones.
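To make the pipeline concrete, the following is a minimal sketch of a bound-calibrated bigram classifier in the spirit of xFakeBibs. All function names (bigrams, train_model, overlap_ratio, calibrate, classify) are illustrative assumptions rather than the authors' implementation; the TF-IDF weighting step is omitted for brevity, and a plain set of bigrams stands in for the trained bigram network.

```python
# Minimal, hypothetical sketch of a bound-calibrated bigram classifier.
# Not the authors' code: TF-IDF filtering is omitted, and a plain set of
# bigrams stands in for the trained bigram network.

def bigrams(text):
    """Lowercase word bigrams of one article."""
    words = text.lower().split()
    return set(zip(words, words[1:]))

def train_model(articles):
    """Union of bigrams over the training corpus."""
    model = set()
    for article in articles:
        model |= bigrams(article)
    return model

def overlap_ratio(article, model):
    """Fraction of an article's bigrams already present in the model."""
    b = bigrams(article)
    return len(b & model) / len(b) if b else 0.0

def calibrate(folds, model):
    """Mean overlap of each calibrating fold -> (lower, upper) bounds."""
    means = [sum(overlap_ratio(a, model) for a in fold) / len(fold)
             for fold in folds]
    return min(means), max(means)

def classify(article, model, lower, upper):
    """POSITIVE (real) if the overlap falls within the calibrated bounds."""
    ratio = overlap_ratio(article, model)
    return "POSITIVE" if lower <= ratio <= upper else "NEGATIVE"
```

Under these assumptions, an article whose bigram overlap falls outside the range observed across the real calibrating folds is flagged as fake.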
RESULTS
When comparing the training model with the calibration models, we found that the bigram overlaps fluctuated between 19% and 21%. The calibrating folds each contributed 51%-70% new bigrams, whereas ChatGPT contributed only 23%, less than half of the contribution of any of the 10 calibrating folds. When classifying the individual articles, the xFakeBibs algorithm predicted 98/100 publications as fake, while 2 articles evaded detection and were classified as real publications.
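As a quick arithmetic check on the reported figures (a worked example, not additional data): the smallest new-bigram contribution among the calibrating folds was 51%, half of which is 25.5%, so ChatGPT's 23% sits below half of even the weakest fold.

```python
# Worked check of the reported contributions (figures from the abstract).
fold_min, fold_max = 0.51, 0.70   # range over the 10 calibrating folds
chatgpt = 0.23
assert chatgpt < fold_min / 2      # 0.23 < 0.255: below half the weakest fold
```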
CONCLUSIONS
This work provides clear evidence of how to distinguish ChatGPT-generated articles from real articles. The analysis demonstrated that such content is distinguishable in bulk, and the algorithmic approach detected individual fake articles with a high degree of accuracy. However, it remains challenging to detect all fake records. ChatGPT may seem to be a useful tool, but it certainly presents a threat to our authentic knowledge and real science. This work is indeed a step in the right direction to counter fake science and misinformation.
CLINICALTRIAL
N/A