The Mason-Alberta Phonetic Segmenter: a forced alignment system based on deep neural networks and interpolation
Authors:
Kelley, Matthew C. (1); Perry, Scott James (2); Tucker, Benjamin V. (2, 3)
Affiliation:
1. Department of English, Linguistics Program, George Mason University, Fairfax, VA, USA
2. Department of Linguistics, University of Alberta, Edmonton, AB, Canada
3. Department of Communication Sciences and Disorders, Northern Arizona University, Flagstaff, AZ, USA
Abstract
Given an orthographic transcription, forced alignment systems automatically determine boundaries between segments in speech, facilitating the use of large corpora. In the present paper, we introduce a neural network-based forced alignment system, the Mason-Alberta Phonetic Segmenter (MAPS). MAPS serves as a testbed for two possible improvements we pursue for forced alignment systems. The first is treating the acoustic model as a tagger, rather than a classifier, motivated by the common understanding that segments are not truly discrete and often overlap. The second is an interpolation technique that allows boundaries more precise than the typical 10 ms limit in modern systems. During testing, all system configurations we trained significantly outperformed the state-of-the-art Montreal Forced Aligner at the 10 ms boundary placement tolerance threshold. The greatest difference achieved was a 28.13% relative performance increase. The Montreal Forced Aligner began to slightly outperform our models at around a 30 ms tolerance. We also reflect on the training process for acoustic modeling in forced alignment, highlighting how the output targets for these models do not match phoneticians' conception of similarity between phones, and that reconciling this tension may require rethinking the task and output targets, or how speech itself should be segmented.
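The abstract does not spell out the interpolation method itself. As an illustration only, one way to place a boundary below the frame rate of an acoustic model is to linearly interpolate the point where the outgoing phone's posterior curve crosses the incoming phone's. Everything below (the function name, the crossing-based criterion, the example posteriors) is a hypothetical sketch, not necessarily MAPS's actual algorithm:

```python
import numpy as np

def refine_boundary(post_a, post_b, frame_step_ms=10.0):
    """Place a phone boundary where the posterior curves for the
    outgoing phone (post_a) and incoming phone (post_b) cross,
    linearly interpolating between the two straddling frames to get
    a time finer than the frame step (here, 10 ms per frame)."""
    d = np.asarray(post_a) - np.asarray(post_b)  # positive while A dominates
    # First frame index where A still dominates but B takes over next frame
    i = int(np.flatnonzero((d[:-1] > 0) & (d[1:] <= 0))[0])
    frac = d[i] / (d[i] - d[i + 1])  # fraction of a frame to the zero crossing
    return (i + frac) * frame_step_ms

# Example: A's posterior falls while B's rises; the curves cross
# two thirds of the way through the second frame interval,
# giving a boundary at roughly 16.67 ms rather than 10 or 20 ms.
boundary_ms = refine_boundary([0.9, 0.7, 0.4, 0.1], [0.1, 0.3, 0.6, 0.9])
```

With a 10 ms frame step, any frame-level decision quantizes boundaries to multiples of 10 ms; interpolating between frames is what lifts that quantization limit.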
Funders
Social Sciences and Humanities Research Council of Canada
Kule Institute for Advanced Study
Publisher
Walter de Gruyter GmbH