Funder: National Science Foundation