BACKGROUND
ChatGPT may have the potential to provide detailed health information, including on dry eye disease.
OBJECTIVE
This study aimed to evaluate the quality, readability, and comprehensibility of the content generated by ChatGPT in response to the most frequently searched Google queries about dry eye disease (DED).
METHODS
The research employed Google Trends to identify the most commonly searched terms associated with DED. These keywords were then entered into ChatGPT, and the generated responses were evaluated for quality using the Ensuring Quality Information for Patients (EQIP) tool. The readability of the content was measured using both the Flesch-Kincaid Grade Level (FKGL) and Flesch-Kincaid Reading Ease (FKRE) parameters.
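For reference, and assuming the conventional Flesch formulas were applied (they are not reproduced in the original abstract), the two readability metrics are computed as:
\[
\text{FKRE} = 206.835 - 1.015\left(\frac{\text{total words}}{\text{total sentences}}\right) - 84.6\left(\frac{\text{total syllables}}{\text{total words}}\right)
\]
\[
\text{FKGL} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right) + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59
\]
Higher FKRE scores indicate easier text, whereas higher FKGL scores correspond to a higher school grade level required to understand the text.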
RESULTS
The most commonly searched phrases were "eye drops," "dry eyes," and "dry eye drops." The countries showing the greatest interest in these topics were the United States of America, Ireland, and the United Kingdom. The statistical analysis revealed substantial concerns regarding the readability and comprehensibility of ChatGPT's written content about DED, indicating a need for improvement. The low average EQIP score likewise indicated that the quality and reliability of the content generated by ChatGPT need to be improved.
CONCLUSIONS
The results of this study indicated that the readability of ChatGPT's content on DED exceeded predefined standards and also highlighted concerns about its quality. Quality could be enhanced by retraining the artificial intelligence with credible sources and verifying its information through expert review.