Publisher: Springer Nature Switzerland
Cited by: 3 articles