The ability of Chat-GPT to paraphrase texts and reduce plagiarism (Preprint)
Author:
Amini-Salehi Ehsan, Hassanipour Soheil, Bozorgi Ali, Keivanlou Mohammad-Hossein, Dave Tirth, Alotaibi Abdulhadi, Mellatdoust Parinaz, Joukar Farahnaz, Bakhshi Arash
Abstract
Background: The introduction of ChatGPT by OpenAI has garnered significant attention. Among its capabilities, paraphrasing stands out; however, our study aimed to investigate whether the plagiarism levels in the text paraphrased by this chatbot are acceptably low.
Methods: Three texts of varying lengths were presented to ChatGPT, which was then instructed to paraphrase the provided text using five different prompts. In the subsequent stage of the study, each text was divided into separate paragraphs, and ChatGPT was requested to paraphrase each paragraph individually. Lastly, in the third stage, ChatGPT was asked to paraphrase the texts it had previously generated.
Results: The average plagiarism rate in the texts generated by ChatGPT was 45%. ChatGPT exhibited a substantial reduction in text plagiarism for the provided texts (MD = -0.51, 95% CI: -0.54, -0.48, P < 0.001). Furthermore, when comparing the second attempt with the initial attempt, a significant decrease in the plagiarism rate was observed (MD = -0.06, 95% CI: -0.08, -0.03, P < 0.001). The number of paragraphs in the texts demonstrated a noteworthy association with the percentage of plagiarism, with texts consisting of a single paragraph exhibiting the lowest plagiarism rate (P < 0.001).
Conclusion: Although ChatGPT demonstrates a notable reduction of plagiarism within texts, the existing levels of plagiarism remain relatively high.
OBJECTIVE
Due to the increasing popularity of ChatGPT in medical research, studies are needed to identify its pros and cons. In this study, we aim to assess ChatGPT's actual ability to paraphrase and reduce plagiarism by inputting different texts and prompts and assessing the plagiarism rates of the rephrased texts it provides.
METHODS
Selection of Texts
To assess the plagiarism rates and rephrasing capabilities of ChatGPT (version 3.5), three texts were selected for the study. These texts varied in length to provide a comprehensive evaluation of the model's performance. Text 1 consisted of 319 words, text 2 comprised 613 words, and text 3 encompassed 1148 words. The texts were selected from one of our previously published papers [13].
Instructions given to ChatGPT
For each selected text, five distinct prompts were given to ChatGPT to rephrase the text. These prompts were designed to test different aspects of rephrasing and reducing plagiarism (an illustrative scripted version is sketched after the list below). The prompts were as follows:
"Paraphrase the text"
"Rephrase the text"
"Reduce the plagiarism of the text"
"Rephrase it in a way that conveys the same meaning using different words and sentence structure"
"Reword this text using different language"
Subdivision of Texts
To further evaluate the effectiveness of ChatGPT in rephrasing and reducing plagiarism, the original texts were subdivided into multiple paragraphs. Specifically, text one was provided to ChatGPT in one and three paragraphs, text two in one, three, and five paragraphs, and text three in three, five, and seven paragraphs. All the texts with different paragraph numbers were subjected to the same five rephrasing prompts. This approach allowed the paraphrases produced from different paragraph divisions of the same content to be compared.
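The paper does not specify how the texts were split into the desired number of paragraphs; the short Python sketch below shows one plausible, sentence-based way to produce a fixed number of roughly equal paragraph blocks, and is an assumption for illustration only.

import re

def split_into_blocks(text: str, n_blocks: int) -> list[str]:
    """Group the sentences of `text` into at most `n_blocks` roughly equal paragraph blocks."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    per_block = max(1, len(sentences) // n_blocks)
    blocks = [
        " ".join(sentences[i:i + per_block])
        for i in range(0, len(sentences), per_block)
    ]
    # Merge any overflow so that at most n_blocks blocks are returned
    while len(blocks) > n_blocks:
        blocks[-2] = blocks[-2] + " " + blocks[-1]
        blocks.pop()
    return blocks

# e.g. split_into_blocks(text_two, 5) -> five paragraph blocks of text 2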
Second paraphrasing attempt
To assess the influence of multiple rephrasing iterations, the texts generated by ChatGPT were fed back into the system in the same sequence as before. Subsequently, the plagiarism rates of the texts were analyzed using the iThenticate platform, a tool commonly employed for such evaluations in academic settings. This process enabled the measurement and comparison of potential similarities between the original texts and their rephrased counterparts, shedding light on the extent of originality achieved through the rephrasing iterations.
Data Analysis
The data analysis for this study was conducted using SPSS version 19. To compare the plagiarism rates of the texts, a paired t-test was used. This statistical test allowed us to examine whether there were significant differences in plagiarism rates between the original texts and the paraphrased texts generated by ChatGPT. Additionally, to assess the potential impact of different prompts on plagiarism rates, a one-way ANOVA was employed. This analysis aimed to determine whether there were statistically significant differences in plagiarism rates across the various prompts given to ChatGPT. A significance level of p < 0.05 was adopted to determine statistical significance.
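For readers who prefer open-source tooling, the same comparisons can be reproduced outside SPSS. The minimal Python sketch below assumes a hypothetical plagiarism_rates.csv file with one row per generated text and columns original_rate, paraphrased_rate, and prompt; these names are illustrative, not the study's actual dataset.

import pandas as pd
from scipy import stats

# Hypothetical data layout: one row per generated text
df = pd.read_csv("plagiarism_rates.csv")

# Paired t-test: plagiarism rate of the original vs. the paraphrased text
t_stat, p_paired = stats.ttest_rel(df["original_rate"], df["paraphrased_rate"])

# One-way ANOVA: paraphrased plagiarism rate across the five prompts
groups = [g["paraphrased_rate"].to_numpy() for _, g in df.groupby("prompt")]
f_stat, p_anova = stats.f_oneway(*groups)

print(f"Paired t-test: t = {t_stat:.2f}, p = {p_paired:.4f}")
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")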
RESULTS
A total of 90 texts were generated by ChatGPT. General information on plagiarism rates is provided in Table 1. The mean plagiarism rate of the texts was 0.45 ± 0.10; the mean plagiarism rates for the first and second attempts were 0.48 ± 0.09 and 0.42 ± 0.09, respectively.
The potency of ChatGPT in reducing plagiarism
Based on the results of our study, ChatGPT demonstrated an ability to significantly reduce plagiarism in texts right from the first attempt (mean difference [MD] = -0.51, 95% confidence interval [CI]: -0.54, -0.48, p < 0.001). Moreover, our research revealed that further improvements were achieved with the second attempt, which yielded a significantly lower plagiarism rate compared to the initial try (MD = -0.06, 95% CI: -0.08, -0.03, p < 0.001).
The results also showed a correlation between the number of paragraphs within a text and the plagiarism rate. Our findings indicate that texts comprising a single paragraph exhibited the lowest plagiarism rates, and this relationship was statistically significant (p < 0.001). However, when analyzing the five different prompts, we found no significant difference in terms of their plagiarism rates (p = 0.187).
Furthermore, our study did not identify any statistically significant distinctions among the plagiarism rates of text 1, text 2, and text 3 (p = 0.556), suggesting that ChatGPT's effectiveness remained consistent across these particular texts.
Correlation between text lengths and plagiarism rates
We assessed the correlation between the word count of the texts provided by ChatGPT and their plagiarism rates. Although longer texts appeared to have higher plagiarism rates, the correlation was not significant (r = 0.2, P = 0.059) (Figure 1).
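The reported correlation can be computed with a Pearson test; a minimal sketch, again assuming the hypothetical plagiarism_rates.csv layout described above with an added word_count column:

import pandas as pd
from scipy import stats

df = pd.read_csv("plagiarism_rates.csv")  # hypothetical file, one row per generated text
r, p_value = stats.pearsonr(df["word_count"], df["paraphrased_rate"])
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")  # reported above: r = 0.2, P = 0.059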
CONCLUSIONS
While ChatGPT has been shown to significantly reduce plagiarism in texts, it is important to note that the resulting plagiarism rates of the provided texts may still be considered high and may not meet the acceptance criteria of most scientific journals. Therefore, medical writers and professionals should carefully consider this issue when utilizing ChatGPT to paraphrase their texts. Authors can employ two strategies to improve the paraphrasing efficacy of ChatGPT: presenting the texts in a single-paragraph format and repeating the paraphrasing request. By considering these strategies and being mindful of the potential limitations, authors can address the challenge of high plagiarism rates associated with ChatGPT's outputs.
Publisher
JMIR Publications Inc.
Cited by
1 article.