Automatic cross‐ and multi‐lingual recognition of dysphonia by ensemble classification using deep speaker embedding models

Authors:

Aziz Dosti¹, Sztahó Dávid¹

Affiliation:

1. Department of Telecommunications and Media Informatics, Budapest University of Technology and Economics, Budapest, Hungary

Abstract

Machine learning (ML) algorithms have demonstrated remarkable performance in dysphonia detection using speech samples. However, their efficacy often diminishes when they are tested on languages different from the training data, raising questions about their suitability in clinical settings. This study aims to develop a robust method for cross- and multi-lingual dysphonia detection that overcomes the language dependency of existing ML methods. We propose an approach that leverages speech embeddings from speaker verification models, specifically ECAPA and x-vector, and employs a majority-voting ensemble classifier: features extracted from the ECAPA and x-vector embeddings are used to train three distinct classifiers. The key advantage of these embedding models is their ability to capture speaker characteristics in a language-independent manner within fixed-dimensional feature spaces. Additionally, we investigate the impact of generating synthetic data in the embedding feature space using the Synthetic Minority Oversampling Technique (SMOTE). Our experimental results demonstrate the effectiveness of the proposed method for dysphonia detection. Compared with x-vector embeddings, ECAPA consistently performs better at distinguishing healthy from dysphonic speech, achieving accuracies of 93.33% and 96.55% in the cross-lingual and multi-lingual scenarios, respectively. This highlights the capability of speaker verification models, especially ECAPA, to capture language-independent features that enhance overall detection performance. The proposed method effectively addresses the challenge of language dependency in dysphonia detection: ECAPA embeddings, combined with majority-voting ensemble classifiers, show significant potential for improving the accuracy and reliability of dysphonia detection in cross- and multi-lingual scenarios.
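The pipeline described in the abstract (speaker embeddings, SMOTE oversampling in the embedding space, and a majority-voting ensemble of three classifiers) can be sketched with scikit-learn. The specific classifiers, the 192-dimensional embedding size, and the hand-rolled SMOTE-style interpolation below are illustrative assumptions rather than the paper's exact configuration; random vectors stand in for real ECAPA embeddings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for ECAPA embeddings (typically 192-dimensional); in the paper
# these would be extracted from speech by a pretrained speaker model.
n, dim = 150, 192
X = rng.normal(size=(n, dim))
y = np.array([0] * 120 + [1] * 30)  # imbalanced: 0 = healthy, 1 = dysphonic
X[y == 1] += 0.5                    # shift the minority class so fitting is meaningful

def smote_like(X_min, n_new, k=5, rng=rng):
    """Simplified SMOTE: interpolate a minority sample toward one of its
    k nearest minority neighbours (the paper uses SMOTE proper)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    base = rng.integers(0, len(X_min), size=n_new)
    _, neigh = nn.kneighbors(X_min[base])
    # Column 0 is the sample itself, so pick a neighbour from columns 1..k.
    pick = neigh[np.arange(n_new), rng.integers(1, k + 1, size=n_new)]
    lam = rng.random((n_new, 1))
    return X_min[base] + lam * (X_min[pick] - X_min[base])

# Oversample the minority (dysphonic) class directly in embedding space.
X_new = smote_like(X[y == 1], n_new=90)
X_bal = np.vstack([X, X_new])
y_bal = np.concatenate([y, np.ones(len(X_new), dtype=int)])

# Three distinct classifiers combined by hard (majority) voting,
# mirroring the ensemble described in the abstract.
ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC()),
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(random_state=0)),
    ],
    voting="hard",
)
ensemble.fit(X_bal, y_bal)
preds = ensemble.predict(X_bal)
```

In practice the embeddings would come from a pretrained ECAPA or x-vector model (e.g. via a speaker verification toolkit such as SpeechBrain), and evaluation would use held-out speakers and languages rather than the training set.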

Publisher

Wiley

