Dear Editors,
Amidst the rapid advancements in artificial intelligence tools, we read with great interest the editorials published in your journal on the subject of “artificial intelligence and academic articles” [1, 2]. First and foremost, we would like to express our gratitude for fostering an essential platform for discourse on this timely topic.
The rapid advancements emerging in artificial intelligence tools undoubtedly promise significant contributions not only in various fields but also in science. Just as science itself progresses, the tools that support and enable its advancement evolve as well. For instance, had we sent this letter to your journal thirty years ago, we might have needed to send it by post. Had our letter been published in your journal twenty years ago, it would have been read in hard copy rather than online. Similarly, had we been practising medicine hundreds of years ago, we could have done little for patients whom we can now treat easily in the operating room with the aid of ultrasound guidance.
It is highly likely that, thanks to artificial intelligence tools, many tasks will become significantly more efficient and practical in the future. From this perspective, we believe that incorporating artificial intelligence tools into scientific work is a necessity. However, as you have pointed out in your editorials [1, 2], the inclusion of artificial intelligence tools as authors of academic research remains a significant topic of debate. Based on our current knowledge and perspective, we believe this practice may not be entirely appropriate.
We believe that one of the most crucial points of contention regarding the listing of artificial intelligence tools as authors of academic research is the concept of “accuracy”. Artificial intelligence tools provide information drawn from the internet, and whether their sources genuinely come from reputable journals cannot be definitively determined. This poses a significant challenge to ensuring the accuracy of such contributions and suggests that articles written by artificial intelligence may not be sufficiently reliable. For instance, when we asked ChatGPT about “the lumbar transforaminal injection method”, it provided a great deal of information on the topic. However, when asked for references, it responded, “The information I provide is based on a vast dataset of text from a wide range of sources available on the internet, including books, websites, research papers, and more.” Indeed, it may also retrieve information from virtual and/or fake accounts. In essence, as of now, artificial intelligence lacks a truth filter comparable to that of a human. While artificial intelligence facilitates rapid access to information, the unreliability of its underlying data casts doubt on the information it presents. Furthermore, we believe that artificial intelligence cannot share an equal level of responsibility with human authors for the information it provides. For these reasons, we are of the opinion that the responsibility for verifying the accuracy of information presented by AI applications lies entirely with the human authors, and that artificial intelligence applications should not be listed as authors of articles.
Yours Sincerely,