Authors
Ristea Nicolae-Cătălin, Anghel Andrei, Ionescu Radu Tudor
Abstract
Speech classification tasks often require powerful language understanding models to grasp useful features, which becomes problematic when limited training data is available. To attain superior classification performance, we propose to harness the inherent value of multimodal representations by transcribing speech using automatic speech recognition models and translating the transcripts into different languages via pretrained translation models. We thus obtain an audio–textual (multimodal) representation for each data sample. Subsequently, we combine language-specific Bidirectional Encoder Representations from Transformers with Wav2Vec2.0 audio features via a novel cascaded cross-modal transformer (CCMT). Our model is based on two cascaded transformer blocks. The first one combines text-specific features from distinct languages, while the second one combines acoustic features with multilingual features previously learned by the first transformer block. We employed our system in the Requests Sub-Challenge of the ACM Multimedia 2023 Computational Paralinguistics Challenge. CCMT was declared the winning solution, obtaining an unweighted average recall of 65.41% and 85.87% for complaint and request detection, respectively. Moreover, we applied our framework on the Speech Commands v2 and HVB dialog data sets, surpassing previous studies reporting results on these benchmarks. Our code is freely available for download at: https://github.com/ristea/ccmt.
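The abstract describes a two-stage cascade: a first transformer block fuses text features coming from language-specific BERT encoders, and a second block fuses Wav2Vec2.0 audio features with the resulting multilingual representation. The sketch below illustrates that cascade in PyTorch; the feature extractors are assumed to run upstream, and the embedding dimension, head count, layer count and classification head are illustrative assumptions rather than the authors' exact configuration (see the official repository for the actual implementation).

```python
# Minimal sketch of the cascaded cross-modal fusion described in the abstract.
# Wav2Vec2.0 and language-specific BERT feature extraction is assumed to happen
# upstream; dimensions and layer counts below are illustrative assumptions.
import torch
import torch.nn as nn


class CascadedCrossModalTransformer(nn.Module):
    def __init__(self, dim=768, num_heads=8, num_layers=2, num_classes=2):
        super().__init__()
        # First cascade stage: fuses text tokens from the different languages.
        self.text_fusion = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True),
            num_layers=num_layers,
        )
        # Second cascade stage: fuses audio tokens with the multilingual text representation.
        self.audio_text_fusion = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True),
            num_layers=num_layers,
        )
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, text_feats_per_language, audio_feats):
        # text_feats_per_language: list of (batch, seq_len, dim) BERT outputs, one per language
        # audio_feats: (batch, audio_len, dim) Wav2Vec2.0 output
        multilingual = torch.cat(text_feats_per_language, dim=1)
        multilingual = self.text_fusion(multilingual)
        fused = torch.cat([audio_feats, multilingual], dim=1)
        fused = self.audio_text_fusion(fused)
        return self.classifier(fused.mean(dim=1))  # pooled prediction


if __name__ == "__main__":
    model = CascadedCrossModalTransformer()
    texts = [torch.randn(4, 32, 768) for _ in range(3)]  # e.g., transcripts in three languages
    audio = torch.randn(4, 96, 768)
    print(model(texts, audio).shape)  # torch.Size([4, 2])
```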
Publisher
Springer Science and Business Media LLC