1. Josh Beal, Hao-Yu Wu, Dong Huk Park, Andrew Zhai, and Dmitry Kislyuk. 2022. Billion-Scale Pretraining with Vision Transformers for Multi-Task Visual Representations. In WACV.
2. Sean Bell et al. 2020. GrokNet: Unified Computer Vision Model Trunk and Embeddings For Commerce. In KDD.
3. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In NeurIPS.
4. Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. 2020. Unsupervised learning of visual features by contrasting cluster assignments. In NeurIPS.
5. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020a. A simple framework for contrastive learning of visual representations. In ICML.