Abstract
Background: Childhood cancer incidence is rising by 1.1% annually, with leukemia up 0.6% and soft-tissue sarcomas up 1.8%. This trend challenges pediatric oncology and increases the demand for accurate online medical information. This study examined ChatGPT's accuracy and reliability in answering questions about childhood tumors, as well as its ability to provide emotional support.
Methods: We screened 150 questions from authoritative sources to assess ChatGPT's effectiveness in providing accurate information on childhood cancer. Pediatric oncologists evaluated the responses in a double-blind design using a four-level scoring system. We also assessed ChatGPT's ability to provide emotional support using ten questions tailored to users' specific needs.
Results: ChatGPT demonstrated high accuracy, correctly answering 132 of 150 questions (88%) across four domains: basic knowledge (28%), diagnosis (26.7%), treatment (32%), and prevention (13.3%). It provided 13 (8.7%) correct but incomplete responses and 5 (3.3%) partially correct responses, with no completely incorrect answers. Reproducibility was high at 98%. On ten questions concerning humanistic care and emotional support for children with cancer, ChatGPT received a "B" grade for empathy and an "A" for effective communication; for emotional support, it scored "B" on eight occasions and "C" on two.
Conclusion: Our findings suggest that ChatGPT's accuracy and reproducibility could enable it to offer virtual doctor consultations, although its capacity for emotional support needs improvement. As ChatGPT evolves, it may assume roles traditionally held by physicians. Further research is necessary to assess the risks and efficacy of ChatGPT in pediatric oncology and other medical fields to enhance patient outcomes.