Authors: Laurent Lam, Pirashanth Ratnamogan, Joël Tang, William Vanhuffel, Fabien Caspani
Publisher: Springer Nature Switzerland
References: 27 articles.
1. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (2019). http://aclanthology.org/N19-1423.pdf
2. Douzon, T., Duffner, S., Garcia, C., Espinas, J.: Improving information extraction on business documents with specific pre-training tasks. In: Document Analysis Systems: 15th IAPR International Workshop, DAS 2022. LNCS, vol. 13237, pp. 111–125. Springer International Publishing, La Rochelle, France (2022). https://doi.org/10.1007/978-3-031-06555-2_8, http://hal.archives-ouvertes.fr/hal-03676134
3. Hendrycks, D., Burns, C., Chen, A., Ball, S.: CUAD: an expert-annotated NLP dataset for legal contract review. arXiv preprint arXiv:2103.06268 (2021)
4. Huang, Y., Lv, T., Cui, L., Lu, Y., Wei, F.: LayoutLMv3: pre-training for document AI with unified text and image masking. In: Proceedings of the 30th ACM International Conference on Multimedia, pp. 4083–4091. MM 2022, Association for Computing Machinery, New York, NY, USA (2022). https://doi.org/10.1145/3503161.3548112
5. Huang, Z., et al.: ICDAR 2019 competition on scanned receipt OCR and information extraction. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 1516–1520 (2019). http://arxiv.org/pdf/2103.10213.pdf
Cited by: 1 article.