Abstract
The rapid evolution of Internet of Things (IoT) and Artificial Intelligence (AI) technologies has opened new horizons in public healthcare. However, maximizing their potential requires precise and effective integration, particularly for obtaining specific healthcare information. This study focuses on Dry Eye Disease (DED), simulating 5,747 patient complaints to establish an IoT-enabled, AI-driven DED-detection system. A specialized prompt mechanism, built on the OpenAI GPT-4.0 and ERNIE Bot-4.0 APIs, is developed to evaluate the urgency of the medical attention required. The primary goal is to enhance the accuracy and interpretability of AI responses in interactions between DED patients and AI systems. A BERT machine learning model is also implemented for text classification, differentiating urgent from non-urgent cases based on the AI-generated responses. User satisfaction is measured through Service Experiences (SE) and Medical Quality (MQ), and a composite satisfaction score is computed as the average of the two assessments. A comparison between prompted and non-prompted queries reveals a significant accuracy increase from 80.1% to 99.6%. However, this improvement is accompanied by a notable rise in response time, indicating a potential trade-off between accuracy and user satisfaction. In-depth analysis shows that prompted queries decrease SE satisfaction (from 95.5 to 84.7) while substantially increasing MQ satisfaction (from 73.4 to 96.7). These results highlight the need to carefully balance accuracy, response time, and user satisfaction when developing and deploying IoT-integrated AI systems in medical applications. The study underscores the crucial role of prompt engineering in improving the quality of AI-based healthcare services delivered through virtual assistants. Integrating IoT with GPT-based models in ophthalmic virtual assistant development presents a promising direction for enhancing healthcare delivery in eye care. Future research should focus on optimizing prompt structures, exploring dynamic prompting approaches, prioritizing user-centric evaluations, conducting real-time implementation studies, and considering hybrid model development to address the identified strengths, weaknesses, opportunities, and threats.
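A minimal sketch of the composite satisfaction score described above, assuming the equal weighting implied by "average" (the exact weighting scheme is not specified in this section):

\[
S_{\text{composite}} = \frac{SE + MQ}{2}
\]

Under this assumption, the reported SE and MQ values would imply composite scores of approximately 84.5 for non-prompted queries ((95.5 + 73.4)/2) and 90.7 for prompted queries ((84.7 + 96.7)/2).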