BACKGROUND
Machine learning models have advanced medical image processing and can yield faster, more accurate diagnoses. Despite a wealth of available medical imaging data, high-quality labeled data for model training is lacking.
OBJECTIVE
We investigated whether a gamified crowdsourcing platform with built-in quality-control metrics could produce lung ultrasound clip labels comparable in quality to those from clinical experts.
METHODS
In total, 2,384 lung ultrasound clips were retrospectively collected from 203 patients. Six lung ultrasound experts classified 393 of these clips as having no B-lines, one or more discrete B-lines, or confluent B-lines to create two sets of reference standard labels: a 195-clip training set and a 198-clip test set. The training set was used to train users on a gamified crowdsourcing platform, and the test set was used to compare the concordance of the resulting crowd labels with the concordance of individual experts against the reference standard.
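For clarity, the concordance metric used here is simple percent agreement between a set of labels and the reference standard. The following is a minimal illustrative sketch, not the study's code; the function and variable names are hypothetical.

    # Minimal sketch (not the study's code): percent agreement between one
    # rater's labels and the reference standard labels for the same clips.
    def concordance(labels, reference):
        """Return the fraction of clips on which the label matches the reference."""
        assert len(labels) == len(reference)
        matches = sum(lab == ref for lab, ref in zip(labels, reference))
        return matches / len(reference)

    # Example with the three B-line classes used in the study.
    crowd = ["none", "discrete", "confluent", "none"]
    reference = ["none", "discrete", "discrete", "none"]
    print(concordance(crowd, reference))  # 0.75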
RESULTS
In total, 99,238 crowdsourced opinions on the 2,384 lung ultrasound clips were collected from 426 unique users over 8 days. On the 198 test set clips, mean labeling concordance of individual experts with the reference standard was 85.0% ± 2.0 (SEM), compared with 87.9% for crowdsourced labels (p=0.15). When each expert's labels were instead compared with reference standards formed by majority vote of the remaining experts, crowd concordance was higher than the mean concordance of individual experts (87.4% vs. 80.8% ± 1.6; p<0.001).
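The leave-one-out comparison in the second result can be illustrated as follows. This is a sketch under our own assumptions about data layout, not the study's code, and all names are hypothetical.

    # Sketch only: score each expert against a majority vote of the other experts.
    from collections import Counter

    def majority_vote(labels):
        """Most common label among the given labels (ties broken arbitrarily)."""
        return Counter(labels).most_common(1)[0][0]

    def leave_one_out_concordance(expert_labels):
        """expert_labels: dict mapping expert id -> list of labels, one per clip."""
        scores = {}
        for expert, labels in expert_labels.items():
            others = [lbls for name, lbls in expert_labels.items() if name != expert]
            reference = [majority_vote(clip) for clip in zip(*others)]
            agree = sum(lab == ref for lab, ref in zip(labels, reference))
            scores[expert] = agree / len(reference)
        return scores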
CONCLUSIONS
Crowdsourced labels for B-line classification obtained via a gamified approach achieved expert-level quality. Scalable, high-quality labeling approaches may facilitate the creation of training datasets for machine learning model development.