Abstract
Background
Researchers have developed machine learning-based ECG diagnostic algorithms that match or even surpass cardiologist-level performance. However, most of them cannot be used in real-world settings, as older-generation ECG machines do not permit installation of new algorithms.
Objective
To develop a smartphone application that automatically extracts ECG waveforms from photos and converts them to voltage-time series for downstream analysis by a variety of diagnostic algorithms built by researchers.
Methods
A novel approach using object detection and image segmentation models to automatically extract ECG waveforms from photos taken by clinicians was devised. Modular machine learning models were developed to sequentially perform waveform identification, gridline removal, and scale calibration. The extracted data were then analysed using a machine learning-based cardiac rhythm classifier.
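The described pipeline can be illustrated with a toy sketch. In the paper, cropping, gridline removal, and calibration are performed by trained machine learning models; here each step is replaced by a trivial rule-based stand-in purely to show the data flow from a pixel image to a voltage-time series. All function names and the toy image are illustrative assumptions, not from the paper.

```python
# Toy sketch of the extraction pipeline: gridline removal followed by
# pixel-to-voltage calibration. Real models are replaced by simple rules.

def remove_gridlines(image, grid_value=1):
    # Stand-in for the segmentation model: erase gridline pixels (value 1),
    # keeping only the waveform trace (value 2 in this toy image).
    return [[px if px != grid_value else 0 for px in row] for row in image]

def trace_to_signal(image, mv_per_pixel=0.1):
    # Stand-in for scale calibration: map the trace's row position in each
    # column (time step) to a voltage relative to a baseline row.
    baseline = len(image) // 2
    signal = []
    for col in range(len(image[0])):
        rows = [r for r in range(len(image)) if image[r][col] == 2]
        signal.append((baseline - rows[0]) * mv_per_pixel if rows else 0.0)
    return signal

# 5x5 toy "photo": 0 = background, 1 = gridline, 2 = waveform trace
photo = [
    [0, 0, 2, 0, 0],
    [1, 1, 1, 1, 1],
    [2, 2, 0, 2, 2],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 0],
]
clean = remove_gridlines(photo)
signal = trace_to_signal(clean)  # one voltage value per image column
```

The modular design matters in practice: because each stage has a well-defined input and output, any single model (e.g. the gridline remover) can be retrained or swapped without touching the rest of the pipeline.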
Results
Waveforms from 40 516 scanned and 444 photographed ECGs were automatically extracted. 12 828 of 13 258 (96.8%) scanned and 5399 of 5743 (94.0%) photographed waveforms were correctly cropped and labelled. 11 604 of 12 735 (91.1%) scanned and 5062 of 5752 (88.0%) photographed waveforms achieved successful voltage-time signal extraction after automatic gridline and background noise removal. In a proof-of-concept demonstration, an atrial fibrillation diagnostic algorithm achieved 91.3% sensitivity, 94.2% specificity, 95.6% positive predictive value, 88.6% negative predictive value and 93.4% F1 score, using photos of ECGs as input.
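The reported metrics are internally consistent: the F1 score is the harmonic mean of sensitivity (recall) and positive predictive value (precision), and the stated 93.4% follows from the other two figures. A quick check:

```python
# F1 = harmonic mean of sensitivity (recall) and PPV (precision),
# using the values reported for the atrial fibrillation classifier.
sensitivity = 0.913
ppv = 0.956
f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
# rounds to 0.934, matching the reported 93.4% F1 score
```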
Conclusion
Object detection and image segmentation models allow automatic extraction of ECG signals from photos for downstream diagnostics. This novel pipeline circumvents the need for costly ECG hardware upgrades, thereby paving the way for large-scale implementation of machine learning-based diagnostic algorithms.