Abstract
Deep learning has driven significant progress in emotion recognition from multimedia data. Despite this progress, existing approaches are hindered by insufficient training data, leading to weak generalisation under mismatched conditions. To address these challenges, we propose a learning strategy that jointly transfers emotional knowledge learnt from data-rich datasets to resource-poor datasets. Our method is also able to learn cross-domain features, leading to improved recognition performance. To demonstrate the robustness of the proposed learning strategy, we conducted extensive experiments on several benchmark datasets, including eNTERFACE, SAVEE, EMODB, and RAVDESS. Experimental results show that the proposed method surpassed existing transfer learning schemes by a significant margin.
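The abstract does not detail the joint transfer mechanism, but the general idea of reusing emotional knowledge from a label-rich corpus on a smaller corpus can be illustrated with a conventional transfer learning baseline. The sketch below is a hypothetical illustration only, not the authors' method: it pretrains a shared encoder on a source corpus, then freezes it and fine-tunes a new classification head on a smaller target corpus. All names (EmotionEncoder, toy_loader) and the toy data are assumptions for demonstration.

```python
# Minimal, hypothetical cross-corpus transfer baseline for emotion recognition.
# Not the paper's proposed strategy; a conventional pretrain-then-fine-tune sketch.
import torch
import torch.nn as nn

class EmotionEncoder(nn.Module):
    """Shared feature extractor over fixed-size acoustic feature vectors."""
    def __init__(self, n_features=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

def make_model(encoder, n_classes, hidden=256):
    """Attach a task-specific classification head to the (possibly pretrained) encoder."""
    return nn.Sequential(encoder, nn.Linear(hidden, n_classes))

def train(model, loader, epochs=5, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for feats, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(feats), labels)
            loss.backward()
            opt.step()

def toy_loader(n=64, n_features=128, n_classes=6):
    """Placeholder for real acoustic features from a source or target corpus."""
    feats = torch.randn(n, n_features)
    labels = torch.randint(0, n_classes, (n,))
    ds = torch.utils.data.TensorDataset(feats, labels)
    return torch.utils.data.DataLoader(ds, batch_size=16)

encoder = EmotionEncoder()

# 1) Pretrain encoder + head on the label-rich source corpus.
source_model = make_model(encoder, n_classes=6)
train(source_model, toy_loader(n_classes=6))

# 2) Freeze the transferred encoder and fine-tune a fresh head on the small target corpus.
for p in encoder.parameters():
    p.requires_grad = False
target_model = make_model(encoder, n_classes=4)
train(target_model, toy_loader(n_classes=4))
```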
Publisher
Springer Science and Business Media LLC
Subject
Computer Networks and Communications, Hardware and Architecture, Media Technology, Software
Cited by
1 article.