Environmental Sound Classification Framework Based on L-mHP Features and SE-ResNet50 Network Model
Author:
Huang Mengxiang 1, Wang Mei 1,2, Liu Xin 1,2, Kan Ruixiang 3, Qiu Hongbing 3,4
Affiliation:
1. College of Information Science and Engineering, Guilin University of Technology, Guilin 541006, China
2. Guangxi Key Laboratory of Embedded Technology and Intelligent System, Guilin University of Technology, Guilin 541006, China
3. School of Information and Communication, Guilin University of Electronic Technology, Guilin 541006, China
4. Ministry of Education Key Laboratory of Cognitive Radio and Information Processing, Guilin University of Electronic Technology, Guilin 541006, China
Abstract
Environmental sound classification (ESC) is attracting increasing attention. Scene complexity and personnel mobility make it difficult to model and understand environmental sounds in ESC tasks. To address these issues, this paper proposes an audio classification framework based on L-mHP features and the SE-ResNet50 model, together with an improved dual-channel data augmentation scheme based on a symmetric structure for model training. First, the L-mHP feature is proposed to characterize environmental sound. It is a three-channel feature consisting of a Log-Mel spectrogram, a harmonic spectrogram, and a percussive spectrogram; the latter two are obtained by harmonic-percussive source separation (HPSS) of the Log-Mel spectrogram. Then, an improved audio classification model, SE-ResNet50, is proposed based on the ResNet-50 model. The dual-channel data augmentation scheme based on a symmetric structure not only diversifies the audio variants but also makes the model focus on learning time-frequency patterns in the acoustic features during training, thereby improving its generalization performance. Finally, classification experiments were carried out on public datasets, yielding accuracies of 94.92%, 99.67%, and 90.75% on the ESC-50, ESC-10, and UrbanSound8K datasets, respectively. To simulate classification performance in a real environment, the framework was also evaluated on a self-made sound dataset at different signal-to-noise ratios. The experimental results show that the proposed audio classification framework has good robustness and feasibility.
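The L-mHP construction described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses a plain STFT magnitude spectrogram in place of the paper's Log-Mel spectrogram (the mel filterbank is omitted for brevity), and the classic median-filtering HPSS of Fitzgerald to split harmonic and percussive components; filter lengths and mask exponents are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft
from scipy.ndimage import median_filter


def log_spec(X, eps=1e-10):
    """Convert a power spectrogram to a log (dB-like) scale."""
    return 10.0 * np.log10(np.maximum(X, eps))


def l_mhp_feature(y, sr, n_fft=1024, hop=512, kernel=17):
    """Sketch of a three-channel L-mHP-style feature:
    [log spectrogram, harmonic channel, percussive channel]."""
    # Magnitude spectrogram, shape (freq_bins, frames)
    _, _, Z = stft(y, fs=sr, nperseg=n_fft, noverlap=n_fft - hop)
    S = np.abs(Z)
    # Median-filter HPSS: a median along time enhances sustained
    # (harmonic) energy; a median along frequency enhances broadband
    # (percussive) onsets.
    H = median_filter(S, size=(1, kernel))
    P = median_filter(S, size=(kernel, 1))
    # Soft (Wiener-style) masks split the energy between components
    denom = H**2 + P**2 + 1e-10
    S_h = S * (H**2 / denom)
    S_p = S * (P**2 / denom)
    # Stack the three channels for a CNN-style input
    return np.stack([log_spec(S), log_spec(S_h), log_spec(S_p)])


# Synthetic tone stands in for a real environmental recording
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
y = np.sin(2 * np.pi * 440 * t)
feat = l_mhp_feature(y, sr)
print(feat.shape)  # (3, n_fft // 2 + 1, frames)
```

In practice, libraries such as librosa provide mel filterbanks and an HPSS routine, so the feature could equally be built from a Log-Mel spectrogram as the paper specifies; the three stacked channels then feed the SE-ResNet50 classifier like an RGB image.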
Funder
National Natural Science Foundation of China
Innovation Project of GUET Graduate Education
Subject
Physics and Astronomy (miscellaneous),General Mathematics,Chemistry (miscellaneous),Computer Science (miscellaneous)
References
32 articles.
Cited by
1 article.