Abstract
Objective. Cardiopulmonary auscultation is poised to become smarter with the emergence of electronic stethoscopes. Cardiac and lung sounds often overlap in both the time and frequency domains, which degrades auscultation quality and subsequent diagnostic performance. Conventional cardiopulmonary sound separation methods can be challenged by the diversity of cardiac and lung sounds. In this study, the data-driven feature-learning capability of the deep autoencoder and the shared quasi-cyclostationarity characteristic are exploited for monaural separation. Approach. Unlike most existing separation methods, which handle only the amplitude of the short-time Fourier transform (STFT) spectrum, a complex-valued U-net (CUnet) with a deep autoencoder structure is built to fully exploit both amplitude and phase information. As a common characteristic of cardiopulmonary sounds, the quasi-cyclostationarity of cardiac sound is incorporated into the loss function for training. Main results. In experiments separating cardiac and lung sounds for heart valve disorder auscultation, the average signal-to-distortion ratio (SDR), signal-to-interference ratio (SIR), and signal-to-artifact ratio (SAR) achieved for cardiac sounds are 7.84 dB, 21.72 dB, and 8.06 dB, respectively. The detection accuracy of aortic stenosis rises from 92.21% to 97.90%. Significance. The proposed method improves cardiopulmonary sound separation performance and may improve detection accuracy for cardiopulmonary diseases.
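As context for the approach and metrics above, the following is a minimal Python/NumPy sketch of the complex STFT representation (amplitude and phase together, as the CUnet consumes) and the SDR metric used to score separation quality. The function names and parameters are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def stft(x, n_fft=256, hop=128):
    """Naive STFT returning the complex spectrum (frames x bins).

    Keeping the spectrum complex-valued preserves both amplitude
    (np.abs) and phase (np.angle), unlike amplitude-only pipelines.
    """
    win = np.hanning(n_fft)
    frames = [np.fft.rfft(win * x[i:i + n_fft])
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.array(frames)

def sdr(reference, estimate):
    """Signal-to-distortion ratio in dB: energy of the reference
    over the energy of the residual (reference - estimate)."""
    noise = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

# Example: a 10% amplitude error yields an SDR of exactly 20 dB.
ref = np.ones(100)
est = 0.9 * ref
print(round(sdr(ref, est), 2))  # -> 20.0
```

SIR and SAR follow the same ratio-in-dB pattern but decompose the residual into interference and artifact components (as in the BSS Eval framework), which requires the individual source references.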
Funder
Suzhou Science and Technology Project
Cardiovascular and Cerebrovascular Disease Discipline Group
Cited by
1 article.