1. Beyer, L., Hénaff, O.J., Kolesnikov, A., Zhai, X., van den Oord, A.: Are we done with ImageNet? arXiv abs/2006.07159 (2020)
2. Bommasani, R., et al.: On the opportunities and risks of foundation models. arXiv abs/2108.07258 (2021)
3. Borgeaud, S., et al.: Improving language models by retrieving from trillions of tokens. arXiv abs/2112.04426 (2021)
4. Buzzega, P., Boschini, M., Porrello, A., Calderara, S.: Rethinking experience replay: a bag of tricks for continual learning. In: ICPR (2021)
5. Caron, M., et al.: Emerging properties in self-supervised vision transformers. arXiv abs/2104.14294 (2021)