Authors:
Piyush Behre, Sharman Tan, Padma Varadharajan, Shuangyu Chang
Abstract
While speech recognition Word Error Rate (WER) has reached human parity for English, continuous speech recognition scenarios such as voice typing and meeting transcription still suffer from segmentation and punctuation problems resulting from irregular pausing patterns or slow speakers. Transformer sequence tagging models are effective at capturing long bi-directional context, which is crucial for automatic punctuation. Automatic Speech Recognition (ASR) production systems, however, are constrained by real-time requirements, making it hard to incorporate the right context when making punctuation decisions. Context within the segments produced by ASR decoders can be helpful, but it limits overall punctuation performance for a continuous speech session. In this paper, we propose a streaming approach for punctuation or re-punctuation of ASR output using dynamic decoding windows and measure its impact on punctuation and segmentation accuracy across scenarios. The new system tackles over-segmentation issues, improving segmentation F0.5 score by 13.9%. Streaming punctuation achieves an average BLEU score improvement of 0.66 on the downstream task of Machine Translation (MT).
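The abstract's core idea, re-punctuating a continuous token stream over a dynamic decoding window, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the transformer sequence tagger is mocked by a toy rule (`toy_punctuate`), and the window-size parameter `max_window` and all function names are hypothetical.

```python
def toy_punctuate(tokens):
    """Stand-in for a transformer punctuation tagger.

    Returns one label per token; here a toy rule marks a sentence
    boundary after the words "now" or "today".
    """
    return ["PERIOD" if t in {"now", "today"} else "O" for t in tokens]


def stream_punctuate(token_stream, max_window=8):
    """Punctuate a continuous ASR token stream with a dynamic window.

    Tokens accumulate in a buffer; once the tagger predicts a sentence
    boundary, the completed sentence is emitted and the unfinished tail
    is kept as left context for the next decision. A cap on the window
    size bounds latency for slow speakers who never pause.
    """
    buffer = []
    for token in token_stream:
        buffer.append(token)
        labels = toy_punctuate(buffer)
        # Find the most recent predicted sentence boundary, if any.
        last_end = max((i for i, l in enumerate(labels) if l == "PERIOD"),
                       default=-1)
        if last_end >= 0:
            yield " ".join(buffer[: last_end + 1]) + "."
            buffer = buffer[last_end + 1:]
        elif len(buffer) >= max_window:
            # Force-flush an oversized window to keep latency bounded.
            yield " ".join(buffer)
            buffer = []
    if buffer:
        yield " ".join(buffer) + "."


sentences = list(stream_punctuate(
    "let us start now this is a test today".split()))
```

The key contrast with fixed ASR-decoder segments is that the emit point is chosen by the punctuation model itself, so a pause mid-sentence does not force a spurious segment boundary.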
Publisher
Academy and Industry Research Collaboration Center (AIRCC)
Cited by
3 articles.
1. Multi Transcription-Style Speech Transcription Using Attention-Based Encoder-Decoder Model;2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU);2023-12-16
2. An Adaptive Speech Speed Algorithm for Improving Continuous Speech Recognition;2023 4th International Conference on Machine Learning and Computer Application;2023-10-27
3. Transformer-Based Punctuation Restoration Models for Indonesian with English Codeswitching Speech Transcripts;2023 10th International Conference on Advanced Informatics: Concept, Theory and Application (ICAICTA);2023-10-07