Authors: Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Judy Hoffman
Publisher: Springer Nature Switzerland
Cited by: 20 articles.