1. Latha M, Shivakumar M, Manjula R. A study of acoustic characteristics, prosodic and distinctive features of dysarthric speech. Grenze Int J Comput Theory Eng (Spec Issue). 2018;2018:228–35.
2. Farhadipour A, Veisi H, Asgari M, Keyvanrad MA. Dysarthric speaker identification with different degrees of dysarthria severity using deep belief networks. ETRI J. 2018;40(5):643–52.
3. Yakoub MS, Selouani SA, Zaidi BF, Bouchair A. Improving dysarthric speech recognition using empirical mode decomposition and convolutional neural network. EURASIP J Audio Speech Music Process. 2020;2020(1):1–7.
4. Young S, Evermann G, Gales M, Hain T, Kershaw D, Moore G, Odell J, Ollason D, Povey D, Valtchev V, et al. The HTK book (for HTK version 3.3). Cambridge: Cambridge University Engineering Department; 2005.
5. Oue S, Marxer R, Rudzicz F. Automatic dysfluency detection in dysarthric speech using deep belief networks. In: Proceedings of SLPAT 2015: 6th workshop on speech and language processing for assistive technologies. 2015. p. 60–4.