Authors:
Chen Yi-Ming, Hou Xiang-Ting, Lou Dong-Fang, Liao Zhi-Lin, Liu Cheng-Lin
Publisher:
Springer Nature Switzerland
References (26 articles):
1. Zhang, Y., Zhang, B., Wang, R., Cao, J., Li, C., Bao, Z.: Entity relation extraction as dependency parsing in visually rich documents. arXiv preprint: arXiv:2110.09915 (2021)
2. Xu, Y., et al.: LayoutXLM: multimodal pre-training for multilingual visually-rich document understanding. arXiv preprint: arXiv:2104.08836 (2021)
3. Xu, Y., et al.: LayoutLMv2: multi-modal pre-training for visually-rich document understanding. arXiv preprint: arXiv:2012.14740 (2020)
4. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint: arXiv:1810.04805 (2018)
5. Pinkus, A.: Approximation theory of the MLP model in neural networks. Acta Numer. 8, 143–195 (1999)
Cited by:
3 articles.