1. Hassan Akbari et al. 2021. VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text. In NeurIPS 2021, Vol. 34. 24206--24221.
2. Beatrice Alex. 2015. Adapting the Edinburgh geoparser for historical geo-referencing. International Journal of Humanities and Arts Computing (2015).
3. Kumar Ayush et al. 2021. Geography-aware self-supervised learning. In CVPR 2021. 10181--10190.
4. Rishi Bommasani et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021).
5. Tom Brown et al. 2020. Language models are few-shot learners. In NeurIPS 2020, Vol. 33. 1877--1901.