1. Denk, T.I., Reisswig, C.: BERTgrid: contextualized embedding for 2D document representation and understanding. arXiv:1909.04948 [cs], September 2019
2. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805 [cs], May 2019
3. Gal, R., Ardazi, S., Shilkrot, R.: Cardinal graph convolution framework for document information extraction. In: Proceedings of the ACM Symposium on Document Engineering 2020, pp. 1–11. ACM, Virtual Event, CA, USA, September 2020. https://doi.org/10.1145/3395027.3419584
4. Gardner, M., Berant, J., Hajishirzi, H., Talmor, A., Min, S.: Question answering is a format; when is it useful? arXiv:1909.11291 [cs], September 2019
5. Garncarek, L., Powalski, R., Stanisławek, T., Topolski, B., Halama, P., Graliński, F.: LAMBERT: layout-aware language modeling using BERT for information extraction. arXiv:2002.08087 [cs], March 2020