1. Learning transferable visual models from natural language supervision;Radford;In ICML, 2021
2. Learning the best pooling strategy for visual semantic embedding;Chen;In CVPR, 2021
3. Scaling up visual and vision-language representation learning with noisy text supervision;Jia;In ICML, 2021
4. BERT: pre-training of deep bidirectional transformers for language understanding;Devlin;In NAACL, 2019