Billion-Scale Pretraining with Vision Transformers for Multi-Task Visual Representations
Authors: Josh Beal¹, Hao-Yu Wu¹, Dong Huk Park¹, Andrew Zhai¹, Dmitry Kislyuk¹
Cited by 3 articles:
1. Analyzing I/O Performance of a Hierarchical HPC Storage System for Distributed Deep Learning. Parallel and Distributed Computing, Applications and Technologies, 2023.
2. MultiBiSage. Proceedings of the VLDB Endowment, December 2022.
3. ItemSage: Learning Product Embeddings for Shopping Recommendations at Pinterest. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, August 2022.