Author:
Chen Yao, Sun Yanan, Lv Jiancheng, Jia Bijue, Huang Xiaoming
Abstract
Heart sound segmentation (HSS) aims to detect the four stages (first heart sound, systole, second heart sound, and diastole) of a heart cycle in a phonocardiogram (PCG), which is an essential step in automatic auscultation analysis. Traditional HSS methods need to manually extract features before dealing with HSS tasks. These handcrafted features rely heavily on the extraction algorithms, which often results in poor performance under different operating environments. In addition, the high-dimensional and high-frequency characteristics of audio also challenge traditional methods in effectively addressing HSS tasks. This paper presents a novel end-to-end method based on convolutional long short-term memory (CLSTM), which directly uses the audio recording as input to address HSS tasks. In particular, the convolutional layers are designed to extract meaningful features and perform downsampling, and the LSTM layers are developed to conduct the sequence recognition. Both components collectively improve the robustness and adaptability in processing HSS tasks. Furthermore, the proposed CLSTM algorithm is easily extended to other complex heart sound annotation tasks, as it does not need to extract the characteristics of the corresponding tasks in advance. In addition, the proposed algorithm can also be regarded as a powerful feature extraction tool that can be integrated into existing models for HSS. Experimental results on real-world PCG datasets, through comparisons to peer competitors, demonstrate the outstanding performance of the proposed algorithm.
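To make the described pipeline concrete, the following is a minimal PyTorch sketch of a CLSTM-style segmenter: convolutional layers extract features and downsample the raw PCG waveform, and a bidirectional LSTM classifies each resulting frame into one of the four stages. This is not the authors' exact architecture; the layer counts, channel widths, kernel sizes, and the assumed sampling rate are illustrative choices only.

```python
# Minimal sketch of a CLSTM-style heart sound segmenter (hypothetical
# hyperparameters, not the paper's exact configuration).
import torch
import torch.nn as nn

class CLSTMSegmenter(nn.Module):
    def __init__(self, n_states: int = 4, hidden: int = 64):
        super().__init__()
        # Convolutional front end: feature extraction + temporal downsampling (stride 2 twice).
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
        )
        # Recurrent back end: sequence recognition over the downsampled frames.
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_states)  # per-frame state logits

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        # audio: (batch, samples) raw PCG recording
        x = self.conv(audio.unsqueeze(1))      # (batch, channels, frames)
        x, _ = self.lstm(x.transpose(1, 2))    # (batch, frames, 2 * hidden)
        return self.head(x)                    # (batch, frames, n_states)

# Usage: one 5-second recording at an assumed 2 kHz sampling rate.
model = CLSTMSegmenter()
logits = model(torch.randn(1, 10000))
states = logits.argmax(dim=-1)  # per-frame S1 / systole / S2 / diastole labels
```

Because the model consumes the raw recording directly and outputs a label per frame, no task-specific features need to be designed in advance, which is the property the abstract highlights for extending the method to other annotation tasks.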
Funder
National Natural Science Fund for Distinguished Young Scholar
The State Key Program of National Science Foundation of China
Publisher
Springer Science and Business Media LLC
Subject
General Earth and Planetary Sciences, General Environmental Science
Cited by
17 articles.