Authors:
Ru Jiawei, Wang Lizhong, Jia Maoshen, Wen Liang, Wang Chunxi, Zhao Yuhao, Wang Jing
Abstract
This paper proposes a transform-domain audio coding method based on deep complex networks. In the proposed codec, the time-frequency spectrum of the audio signal is fed to an encoder consisting of complex convolutional blocks and a frequency-temporal modeling module, which extracts features that a vector quantizer then quantizes at a target bitrate. The decoder, which reconstructs the time-frequency spectrum of the audio from the quantized features, is symmetrical to the encoder. A structure combining a complex multi-head self-attention module with a complex long short-term memory network is proposed to capture both frequency and temporal dependencies. Subjective and objective evaluation tests show the advantage of the proposed method.
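The complex-valued layers named in the abstract (complex convolutions, complex attention, complex LSTM) all build on the same primitive: applying a complex weight W = Wr + jWi to a complex input x = xr + jxi via W·x = (Wr·xr − Wi·xi) + j(Wr·xi + Wi·xr). The sketch below illustrates that rule for a single matrix-vector product; the function name and use of plain Python lists are illustrative assumptions, not the paper's implementation.

```python
def complex_matvec(w_re, w_im, x_re, x_im):
    """Apply a complex matrix (w_re + j*w_im) to a complex vector
    (x_re + j*x_im), keeping real and imaginary parts separate, as
    deep complex network layers typically do.

    This is a minimal illustrative sketch, not the paper's code.
    """
    def matvec(m, v):
        # Plain real-valued matrix-vector product.
        return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

    # Real part: Wr @ xr - Wi @ xi
    out_re = [a - b for a, b in zip(matvec(w_re, x_re), matvec(w_im, x_im))]
    # Imaginary part: Wr @ xi + Wi @ xr
    out_im = [a + b for a, b in zip(matvec(w_re, x_im), matvec(w_im, x_re))]
    return out_re, out_im


# Cross-check against Python's built-in complex arithmetic.
w_re, w_im = [[1, 0], [0, 1]], [[0, 1], [1, 0]]
x_re, x_im = [1, 2], [3, 4]
re, im = complex_matvec(w_re, w_im, x_re, x_im)
```

In a complex convolutional block the same four real products appear per kernel, which is why such layers cost roughly four real convolutions each.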