Abstract
With an estimated 3 billion people globally lacking access to dermatological care, technological solutions leveraging artificial intelligence (AI) have been proposed to improve access1. Diagnostic AI algorithms, however, require high-quality datasets for development and testing, particularly datasets that enable evaluation of both unimodal and multimodal approaches. Currently, most dermatology AI algorithms are built and tested on proprietary, siloed data, often from a single site and with only a single image type (i.e., clinical or dermoscopic). To address this, we developed and released the Melanoma Research Alliance Multimodal Image Dataset for AI-based Skin Cancer (MIDAS), the largest publicly available, prospectively recruited dataset of paired dermoscopic and clinical images of biopsy-proven, dermatopathology-labeled skin lesions. We explored model performance on real-world cases using four previously published state-of-the-art (SOTA) models and compared model-to-clinician diagnostic performance. We also assessed algorithm performance on clinical photographs taken at different distances from the lesion to evaluate the influence of distance across diagnostic categories.

We prospectively enrolled 796 patients through an IRB-approved protocol with informed consent, representing 1290 unique lesions and 3830 total images (including dermoscopic images and clinical images taken at 15-cm and 30-cm distances). Images represented the diagnostic diversity of lesions seen in general dermatology, with malignant, benign, and inflammatory lesions that included melanocytic nevi (22%; n=234), invasive cutaneous melanomas (4%; n=46), and melanoma in situ (4%; n=47). When evaluating SOTA models on the MIDAS dataset, we observed reduced performance across all models relative to their previously published metrics, indicating challenges to the generalizability of current SOTA algorithms. As a comparative baseline, the dermatologists performing biopsies were 79% accurate with their top-1 diagnosis at differentiating malignant from benign lesions. For malignant lesions, algorithms performed better on images acquired at 15-cm than at 30-cm distance, and dermoscopic images yielded higher sensitivity than clinical images.

Improving our understanding of the strengths and weaknesses of AI diagnostic algorithms is critical as these tools advance toward widespread clinical deployment. Although many algorithms report high performance metrics, caution is warranted given the potential for overfitting to localized datasets. MIDAS’s robust, multimodal, and diverse dataset allows researchers to evaluate algorithms on our real-world images and better assess their generalizability.
Publisher
Cold Spring Harbor Laboratory