1. Joshua Ainslie, Santiago Ontañón, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. 2020. ETC: Encoding Long and Structured Inputs in Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.
2. Amir Amel-Zadeh and Jonathan Faasse. 2016. The information content of 10-K narratives: comparing MD&A and footnotes disclosures. Retrieved November 2, 2019 from https://doi.org/10.2139/ssrn.2807546
3. Dogu Araci. 2019. FinBERT: Financial Sentiment Analysis with Pre-trained Language Models. Master’s thesis. University of Amsterdam.
4. Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150. Retrieved from https://arxiv.org/abs/2004.05150
5. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Eds.), Vol. 33. Curran Associates, 1877–1901. Retrieved from https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf