Interpreting Convolutional Layers in DNN Model Based on Time–Frequency Representation of Emotional Speech
Authors:
Smietanka Lukasz¹, Maka Tomasz¹
Affiliation:
1. Faculty of Computer Science and Information Technology, West Pomeranian University of Technology, Zolnierska 52, 71-210 Szczecin, Poland
Abstract
The paper describes the relations between speech signal representations in the layers of a convolutional neural network. Using activation maps determined by the Grad-CAM algorithm, we analysed the energy distribution in time–frequency space and its relationship with the prosodic properties of the considered emotional utterances. After preliminary experiments with an expressive speech classification task, we selected the CQT-96 time–frequency representation, and in the main experimental phase of the study we used a custom CNN architecture with three convolutional layers. Based on the performed analysis, we show the relationship between activation levels and changes in the voiced parts of the fundamental frequency trajectories. As a result, the relationships between the individual activation maps, the energy distribution, and the fundamental frequency trajectories for six emotional states are described. The results show that, in the learning process, the convolutional neural network uses similar fragments of the time–frequency representation, which are also related to the prosodic properties of emotional speech utterances. We also analysed the relations of the obtained activation maps with time-domain envelopes, which allowed us to observe the importance of speech signal energy in classifying individual emotional states. Finally, we compared the energy distribution of the CQT representation with the energy of the regions overlapping the masks of individual emotional states. As a result, we obtained information on the variability of energy distributions in the selected speech signal representation for particular emotions.
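To make the described pipeline concrete, the following is a minimal sketch (not the authors' code) of the three stages the abstract names: a CQT time–frequency representation fed to a CNN with three convolutional layers, inspected with Grad-CAM. The choice of librosa and PyTorch, the channel widths, the interpretation of "96" in CQT-96 as the number of frequency bins, and the six-class head are assumptions consistent with the abstract, not details taken from the paper.

import librosa
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def cqt_features(path, n_bins=96):
    # Assumption: "CQT-96" denotes a constant-Q transform with 96 bins.
    y, sr = librosa.load(path, sr=None)
    C = np.abs(librosa.cqt(y, sr=sr, n_bins=n_bins, bins_per_octave=12))
    return librosa.amplitude_to_db(C, ref=np.max)   # shape: (96, frames)

class EmotionCNN(nn.Module):
    # Three convolutional layers, as in the custom architecture the
    # abstract mentions; the channel widths here are illustrative.
    def __init__(self, n_classes=6):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = F.relu(self.conv3(x))
        # Keep the last conv layer's activations (and their gradients)
        # so Grad-CAM can use them after the backward pass.
        self.feature_maps = x
        self.feature_maps.retain_grad()
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.head(x)

def grad_cam(model, x, class_idx):
    # Standard Grad-CAM: weight the last conv layer's feature maps by the
    # spatially averaged gradients of the class score, then apply ReLU.
    scores = model(x)
    model.zero_grad()
    scores[0, class_idx].backward()
    grads = model.feature_maps.grad                  # (1, C, H, W)
    weights = grads.mean(dim=(2, 3), keepdim=True)   # per-channel weights
    cam = F.relu((weights * model.feature_maps).sum(dim=1)).squeeze(0)
    return (cam / (cam.max() + 1e-8)).detach().numpy()

# Example usage on a single utterance (hypothetical file path):
#   x = torch.tensor(cqt_features("utterance.wav"))[None, None].float()
#   cam = grad_cam(EmotionCNN(), x, class_idx=0)  # downsampled activation map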
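The prosodic side of the analysis (voiced fundamental frequency trajectories and time-domain envelopes) can likewise be sketched with standard tools. Below is a hedged illustration using librosa's pyin pitch tracker and frame-level RMS energy; the pitch range, hop length, and the crude voiced-frame overlap measure are assumptions for illustration only, not the paper's method.

import librosa
import numpy as np

def prosodic_tracks(path, hop_length=512):
    y, sr = librosa.load(path, sr=None)
    # F0 trajectory; NaN outside voiced regions, matching the abstract's
    # focus on the voiced parts of the trajectories.
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"),
        sr=sr, hop_length=hop_length)
    # Frame-level RMS energy as a simple time-domain envelope.
    env = librosa.feature.rms(y=y, hop_length=hop_length)[0]
    return f0, voiced, env

def cam_voiced_overlap(cam, voiced):
    # Mean activation over voiced frames only, as a crude proxy for the
    # activation/F0 overlap analysis the abstract describes. In practice
    # the Grad-CAM map would first be upsampled back to the CQT frame
    # grid; here we simply truncate to the shorter length.
    n = min(cam.shape[1], len(voiced))
    per_frame = cam[:, :n].mean(axis=0)
    return float(np.nanmean(per_frame[voiced[:n]]))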
Publisher
Walter de Gruyter GmbH
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Hardware and Architecture, Modeling and Simulation, Information Systems