Affiliations:
1. Harvard University, Cambridge, USA
2. Massachusetts Institute of Technology (MIT), Cambridge, USA
Abstract
Scholars in the humanities rely heavily on ancient manuscripts to study the history, religion, and socio-political structures of the past. Significant effort has been devoted to digitizing these precious manuscripts with OCR technology. However, most manuscripts have been blemished over the centuries, making it unrealistic for OCR programs to accurately capture faded characters. This work presents a Transformer + Confidence Score mechanism architecture for post-processing Google's Tibetan OCR outputs. On the Loss and Character Error Rate metrics, our architecture outperforms the plain Transformer, LSTM-to-LSTM, and GRU-to-GRU architectures. Our method can be adapted to post-process OCR outputs in any language.
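The abstract evaluates models with the Character Error Rate (CER) metric. As an illustration only (this is not the paper's code, and the function names are hypothetical), CER is conventionally computed as the character-level Levenshtein edit distance between the OCR hypothesis and the reference text, normalized by the reference length:

```python
def levenshtein(ref: str, hyp: str) -> int:
    # Classic dynamic-programming edit distance over characters.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def character_error_rate(reference: str, hypothesis: str) -> float:
    # CER = minimum edits to turn the hypothesis into the reference,
    # normalized by the reference length; lower is better.
    if not reference:
        return 0.0 if not hypothesis else 1.0
    return levenshtein(reference, hypothesis) / len(reference)
```

For example, a four-character reference with one substituted character yields a CER of 0.25. The same definition applies unchanged to Tibetan Unicode text, since the distance is computed over code points.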
Publisher
Association for Computing Machinery (ACM)