DeepBreath—automated detection of respiratory pathology from lung auscultation in 572 pediatric outpatients across 5 countries
Published: 2023-06-02
Issue: 1
Volume: 6
Page:
ISSN: 2398-6352
Container-title: npj Digital Medicine
Language: en
Short-container-title: npj Digit. Med.
Author:
Heitmann Julien, Glangetas Alban, Doenz Jonathan, Dervaux Juliane, Shama Deeksha M., Garcia Daniel Hinjos, Benissa Mohamed Rida, Cantais Aymeric, Perez Alexandre, Müller Daniel, Chavdarova Tatjana, Ruchonnet-Metrailler Isabelle, Siebert Johan N., Lacroix Laurence, Jaggi Martin, Gervaix Alain, Hartley Mary-Anne, Hugon Florence, Fassbind Derrick, Barro Makura, Bediang Georges, Hafidi N. E. L., Bouskraoui M., Ba Idrissa,
Abstract
The interpretation of lung auscultation is highly subjective and relies on non-specific nomenclature. Computer-aided analysis has the potential to better standardize and automate evaluation. We used 35.9 hours of auscultation audio from 572 pediatric outpatients to develop DeepBreath: a deep learning model identifying the audible signatures of acute respiratory illness in children. It comprises a convolutional neural network followed by a logistic regression classifier, aggregating estimates on recordings from eight thoracic sites into a single prediction at the patient level. Patients were either healthy controls (29%) or had one of three acute respiratory illnesses (71%): pneumonia, wheezing disorders (bronchitis/asthma), or bronchiolitis. To ensure objective estimates of model generalisability, DeepBreath was trained on patients from two countries (Switzerland, Brazil), and results are reported both on an internal 5-fold cross-validation and on external validation (extval) in three other countries (Senegal, Cameroon, Morocco). DeepBreath differentiated healthy and pathological breathing with an area under the receiver operating characteristic curve (AUROC) of 0.93 (standard deviation [SD] ± 0.01 on internal validation). Similarly promising results were obtained for pneumonia (AUROC 0.75 ± 0.10), wheezing disorders (AUROC 0.91 ± 0.03), and bronchiolitis (AUROC 0.94 ± 0.02). Extval AUROCs were 0.89, 0.74, 0.74, and 0.87, respectively. All either matched or were significant improvements over a clinical baseline model using age and respiratory rate. Temporal attention showed clear alignment between model predictions and independently annotated respiratory cycles, providing evidence that DeepBreath extracts physiologically meaningful representations. DeepBreath provides a framework for interpretable deep learning to identify the objective audio signatures of respiratory pathology.
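The abstract describes a two-stage design: a convolutional neural network scores each of the eight thoracic-site recordings, and a logistic regression aggregates those per-site estimates into a single patient-level prediction. The snippet below is a minimal sketch of that shape only, assuming mel-spectrogram inputs, PyTorch for the CNN, and scikit-learn for the aggregator; the names SiteCNN and patient_level_features, the layer choices, and the use of site-level probabilities as aggregation features are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the pipeline described in the abstract: a CNN scores
# each of the eight thoracic-site recordings, and a logistic regression turns
# the eight site-level scores into one patient-level prediction.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

N_SITES = 8  # recordings per patient (eight thoracic sites)


class SiteCNN(nn.Module):
    """Toy convolutional encoder scoring a single (assumed) mel-spectrogram."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, n_mels, time) -> one pathology logit per recording
        return self.head(self.features(spec).flatten(1)).squeeze(-1)


def patient_level_features(cnn: SiteCNN, site_specs: torch.Tensor) -> np.ndarray:
    """Map one patient's eight site recordings to a feature vector for the
    downstream logistic regression (this aggregation scheme is an assumption)."""
    with torch.no_grad():
        logits = cnn(site_specs)           # shape (8,), one logit per site
    return torch.sigmoid(logits).numpy()   # site-level probabilities


if __name__ == "__main__":
    # Illustrative aggregation on synthetic data: one feature vector per
    # patient, one binary label (healthy vs. pathological breathing).
    cnn = SiteCNN().eval()
    rng = np.random.default_rng(0)
    X = np.stack([
        patient_level_features(cnn, torch.randn(N_SITES, 1, 64, 128))
        for _ in range(20)
    ])
    y = rng.integers(0, 2, size=20)
    aggregator = LogisticRegression().fit(X, y)   # patient-level classifier
    print(aggregator.predict_proba(X[:3])[:, 1])  # patient-level probabilities
```

Keeping the aggregator as a plain logistic regression over a handful of site-level scores keeps the patient-level decision easy to inspect, which is consistent with the paper's stated emphasis on interpretability.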
Publisher
Springer Science and Business Media LLC
Subject
Health Information Management, Health Informatics, Computer Science Applications, Medicine (miscellaneous)
Cited by
10 articles.