EnViTSA: Ensemble of Vision Transformer with SpecAugment for Acoustic Event Classification
Authors:
Lim Kian Ming 1, Lee Chin Poo 1, Lee Zhi Yang 2, Ali Alqahtani 3,4
Affiliation:
1. Faculty of Information Science and Technology, Multimedia University, Melaka 75450, Malaysia
2. DZH International Sdn. Bhd., Kuala Lumpur 55100, Malaysia
3. Department of Computer Science, King Khalid University, Abha 61421, Saudi Arabia
4. Center for Artificial Intelligence (CAI), King Khalid University, Abha 61421, Saudi Arabia
Abstract
Recent successes in deep learning have inspired researchers to apply deep neural networks to Acoustic Event Classification (AEC). While deep learning methods can train effective AEC models, they are susceptible to overfitting due to the models’ high complexity. In this paper, we introduce EnViTSA, an innovative approach that tackles key challenges in AEC. EnViTSA combines an ensemble of Vision Transformers with SpecAugment, a novel data augmentation technique, to significantly enhance AEC performance. Raw acoustic signals are transformed into Log Mel-spectrograms using Short-Time Fourier Transform, resulting in a fixed-size spectrogram representation. To address data scarcity and overfitting issues, we employ SpecAugment to generate additional training samples through time masking and frequency masking. The core of EnViTSA resides in its ensemble of pre-trained Vision Transformers, harnessing the unique strengths of the Vision Transformer architecture. This ensemble approach not only reduces inductive biases but also effectively mitigates overfitting. In this study, we evaluate the EnViTSA method on three benchmark datasets: ESC-10, ESC-50, and UrbanSound8K. The experimental results underscore the efficacy of our approach, achieving impressive accuracy scores of 93.50%, 85.85%, and 83.20% on ESC-10, ESC-50, and UrbanSound8K, respectively. EnViTSA represents a substantial advancement in AEC, demonstrating the potential of Vision Transformers and SpecAugment in the acoustic domain.
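The SpecAugment step described in the abstract, generating extra training samples by masking random frequency bands and time spans of a log Mel-spectrogram, can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the authors' implementation; the mask counts and maximum widths are illustrative assumptions.

```python
import numpy as np

def spec_augment(log_mel, num_freq_masks=2, num_time_masks=2,
                 max_freq_width=8, max_time_width=16, rng=None):
    """Apply SpecAugment-style frequency and time masking to a
    log Mel-spectrogram of shape (n_mels, n_frames)."""
    rng = rng or np.random.default_rng()
    aug = log_mel.copy()
    n_mels, n_frames = aug.shape
    fill = aug.mean()  # replace masked regions with the global mean

    # Frequency masking: zero out a horizontal band of mel bins.
    for _ in range(num_freq_masks):
        width = int(rng.integers(0, max_freq_width + 1))
        f0 = int(rng.integers(0, max(1, n_mels - width)))
        aug[f0:f0 + width, :] = fill

    # Time masking: zero out a vertical band of frames.
    for _ in range(num_time_masks):
        width = int(rng.integers(0, max_time_width + 1))
        t0 = int(rng.integers(0, max(1, n_frames - width)))
        aug[:, t0:t0 + width] = fill

    return aug

spec = np.random.randn(64, 128)  # stand-in for a real log Mel-spectrogram
augmented = spec_augment(spec, rng=np.random.default_rng(0))
```

Because the input is copied before masking, each call yields an independent augmented sample of the same shape, which is what makes the technique suitable for on-the-fly training-set expansion.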
Funder
Telekom Malaysia Research & Development; King Khalid University
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
3 articles.