Abstract
Age is an important risk factor among critically ill children, with neonates being the most vulnerable. Clinical prediction models need to account for age differences and must be externally validated and, if necessary, updated to enhance reliability, reproducibility, and generalizability. We externally validated the Smart Triage model using a combined prospective baseline cohort from three hospitals in Uganda and two in Kenya, with admission, mortality, and readmission as outcomes. We evaluated model discrimination using the area under the receiver operating characteristic curve (AUROC) and visualized calibration plots. In addition, we performed subset analyses based on age group (< 30 days, ≤ 2 months, ≤ 6 months, and < 5 years). We revised the model for neonates (< 1 month) by re-estimating the intercept and coefficients and selected new thresholds to maximize sensitivity and specificity. A total of 11,595 participants under the age of five (under-5) were included in the analysis. The proportion with an outcome ranged from 8.9% among all children under-5 (including neonates) to 26% in the neonatal subset alone. The model achieved good discrimination for children under-5, with an AUROC of 0.81 (95% CI: 0.79-0.82), but poor discrimination for neonates, with an AUROC of 0.62 (95% CI: 0.55-0.70). Sensitivity at the low-risk threshold (95% CI) was 0.85 (0.83-0.87) for children under-5 and 0.68 (0.58-0.76) for neonates. Specificity at the high-risk threshold was 0.93 (0.93-0.94) for children under-5 and 0.96 (0.94-0.98) for neonates. After model revision for neonates, we achieved an AUROC of 0.83 (0.79-0.87), with 13% and 41% as the low- and high-risk thresholds, respectively. The Smart Triage model showed good discrimination for children under-5. However, a revised model is recommended for neonates because of their unique disease susceptibility, host response, and underlying physiological reserve.
External validation of the neonatal model, and additional external validation of the under-5 model in different contexts, is required.

Author summary
Clinical prediction models have become ever more popular in many medical fields because they can improve clinical decision-making by providing personalized risk estimates for patients. They are statistical tools that incorporate patient-specific factors to personalize treatment and optimize the allocation of health resources. Clinical prediction models need to be validated in different settings and populations, and updated accordingly, to ensure accuracy and relevance in clinical practice. We aimed to evaluate one such model, currently being implemented in the outpatient pediatric departments of multiple hospitals in Uganda and Kenya. This model has been incorporated into a digital platform that is used to quickly identify critically ill children at triage. After validating the model across different age groups, we found that the current model is not well suited to neonates and therefore updated it. Our study provides new insight into the clinical variables that affect neonatal outcomes, and we hope it will help reduce neonatal mortality in low-resource settings.
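The validation metrics described above (AUROC for discrimination, plus sensitivity at a low-risk threshold and specificity at a high-risk threshold) can be sketched as follows. This is a minimal illustration only: the data, the logistic model, and the 0.13/0.41 thresholds are synthetic stand-ins, not the Smart Triage model, cohort, or published cut-offs.

```python
# Hedged sketch of discrimination and threshold evaluation on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))                     # stand-in triage predictors
logit = 1.5 * x[:, 0] - x[:, 1] - 2.0           # synthetic true risk
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Fit an illustrative risk model and obtain predicted probabilities.
model = LogisticRegression().fit(x, y)
p = model.predict_proba(x)[:, 1]

# Discrimination: area under the receiver operating characteristic curve.
auroc = roc_auc_score(y, p)

def sens_spec(y_true, risk, threshold):
    """Sensitivity and specificity when risk >= threshold flags a child."""
    pred = risk >= threshold
    sens = (pred & (y_true == 1)).sum() / (y_true == 1).sum()
    spec = (~pred & (y_true == 0)).sum() / (y_true == 0).sum()
    return sens, spec

# Illustrative low- and high-risk thresholds (hypothetical values).
low_sens, _ = sens_spec(y, p, 0.13)
_, high_spec = sens_spec(y, p, 0.41)
```

In practice, thresholds would be chosen on the validation cohort to trade off sensitivity (at the low-risk cut-off, to avoid missing sick children) against specificity (at the high-risk cut-off, to avoid over-triage), as the abstract describes.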
Publisher
Cold Spring Harbor Laboratory