1. Alexandre Alahi, Kratarth Goel, Vignesh Ramanathan, Alexandre Robicquet, Li Fei-Fei, and Silvio Savarese. 2016. Social LSTM: Human Trajectory Prediction in Crowded Spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 961--971.
2. Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli. 2022. Data2vec: A general framework for self-supervised learning in speech, vision and language. In ICML. PMLR, 1298--1312.
3. Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021).
4. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning. PMLR, 1597--1607.
5. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL-HLT 2019, Minneapolis, MN, USA, June 2--7, 2019, Volume 1. Association for Computational Linguistics, 4171--4186.