1. COSMOS on Steroids
2. Shivangi Aneja, Chris Bregler, and Matthias Nießner. 2021a. COSMOS: Catching out-of-context misinformation with self-supervised learning. arXiv preprint arXiv:2101.06278 (2021).
3. Shivangi Aneja, Cise Midoglu, Duc-Tien Dang-Nguyen, Michael Alexander Riegler, Paal Halvorsen, Matthias Nießner, Balu Adsumilli, and Chris Bregler. 2021b. MMSys'21 grand challenge on detecting cheapfakes. arXiv preprint arXiv:2107.05297 (2021).
4. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 (2014).
5. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).