Abstract
Early diagnosis of Alzheimer’s disease (AD) is an important task that facilitates the development of treatment and prevention strategies and may potentially improve patient outcomes. Neuroimaging has shown great promise, including amyloid-PET, which measures the accumulation of amyloid plaques in the brain, a hallmark of AD. It is desirable to train end-to-end deep learning models that predict the progression of AD for individuals at early stages from 3D amyloid-PET. However, commonly used models are trained in a fully supervised manner and are inevitably biased toward the given label information. To this end, we propose a self-supervised contrastive learning method to predict AD progression with 3D amyloid-PET. It uses unlabeled data to capture general representations underlying the images. Because the downstream task is classification, unlike general self-supervised learning, which aims to generate task-agnostic representations, we also propose a loss function that utilizes the label information during pre-training. To demonstrate the performance of our method, we conducted experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. The results confirm that the proposed method provides appropriate data representations, resulting in accurate classification.
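The abstract does not give the exact form of the label-aware pre-training loss. The snippet below is a minimal PyTorch sketch of a supervised contrastive (SupCon-style) objective of the general kind described, in which embeddings of scans that share a label are pulled together and all others are pushed apart. The function name, temperature value, and tensor shapes are illustrative assumptions, not the paper's implementation.

# Minimal sketch (assumption, not the paper's code): a label-aware
# contrastive loss that pulls together embeddings sharing a class label.
import torch
import torch.nn.functional as F

def label_aware_contrastive_loss(features, labels, temperature=0.1):
    """features: (N, D) embeddings; labels: (N,) integer class labels."""
    features = F.normalize(features, dim=1)                # work in cosine-similarity space
    logits = features @ features.T / temperature           # (N, N) pairwise similarities
    n = features.size(0)
    not_self = ~torch.eye(n, dtype=torch.bool, device=features.device)
    # Denominator of the softmax: all other samples, anchor excluded
    exp_logits = torch.exp(logits) * not_self
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True))
    # Positives: other samples carrying the same label as the anchor
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                                  # anchors with at least one positive
    loss = -(log_prob * pos_mask.float()).sum(dim=1)[valid] / pos_counts[valid]
    return loss.mean()

# Toy usage: random 128-d embeddings for 8 scans with binary progression labels
emb = torch.randn(8, 128)
lab = torch.randint(0, 2, (8,))
print(label_aware_contrastive_loss(emb, lab))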
Publisher
Cold Spring Harbor Laboratory