Model-Driven Heart Rate Estimation and Heart Murmur Detection Based on Phonocardiogram
Authors: Jingping Nie, Ran Liu, Behrooz Mahasseni, Erdrin Azemi, Vikramjit Mitra
This paper has been accepted at the IEEE International Workshop on Machine Learning for Signal Processing (MLSP) 2024.
Acoustic signals are crucial for health monitoring, particularly heart sounds, which provide essential data such as heart rate and reveal cardiac anomalies such as murmurs. This study utilizes a publicly available phonocardiogram (PCG) dataset to estimate heart rate using model-driven methods and extends the best-performing model to a multi-task learning (MTL) framework for simultaneous heart rate estimation and murmur detection. Heart rate estimates are derived using a sliding window technique on heart sound snippets, analyzed with a combination of acoustic features (Mel spectrogram, cepstral coefficients, power spectral density, and root mean square energy). Our findings indicate that a 2D convolutional neural network (2dCNN) is most effective for heart rate estimation, achieving a mean absolute error (MAE) of 1.312 bpm. We systematically investigate the impact of different feature combinations and find that utilizing all four features yields the best results. The MTL model (2dCNN-MTL) achieves over 95% accuracy in murmur detection, surpassing existing models, while maintaining an MAE of 1.636 bpm in heart rate estimation, satisfying the requirements stated by the Association for the Advancement of Medical Instrumentation (AAMI).
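As a rough illustration of the pipeline described above (not the authors' code), the sketch below extracts the four named acoustic features from sliding windows of a PCG signal with librosa and SciPy, and defines a small 2D CNN with a shared trunk and two heads for heart rate regression and murmur classification. The window length, feature dimensions, network layout, and library choices are assumptions made for this example; the paper does not specify them here.

```python
# Minimal sketch of the described pipeline: sliding windows, four acoustic
# features, and a two-headed 2D CNN. All hyperparameters are illustrative.
import numpy as np
import librosa
import torch
import torch.nn as nn
from scipy.signal import welch


def pcg_windows(y, sr, win_s=5.0, hop_s=1.0):
    """Yield fixed-length sliding windows of a PCG signal (lengths assumed)."""
    win, hop = int(win_s * sr), int(hop_s * sr)
    for start in range(0, len(y) - win + 1, hop):
        yield y[start:start + win]


def extract_features(win, sr, n_mels=64, n_mfcc=20):
    """Mel spectrogram, cepstral coefficients, PSD, and RMS energy for one window."""
    mel = librosa.power_to_db(
        librosa.feature.melspectrogram(y=win, sr=sr, n_mels=n_mels))
    mfcc = librosa.feature.mfcc(y=win, sr=sr, n_mfcc=n_mfcc)
    _, psd = welch(win, fs=sr, nperseg=256)          # power spectral density
    rms = librosa.feature.rms(y=win)                 # root mean square energy
    return mel, mfcc, psd, rms


class PCGMultiTaskCNN(nn.Module):
    """Toy 2D CNN with a shared trunk and two task-specific heads."""

    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        self.hr_head = nn.Linear(32 * 4 * 4, 1)       # heart rate (bpm), regression
        self.murmur_head = nn.Linear(32 * 4 * 4, 2)   # murmur present / absent

    def forward(self, x):                             # x: (batch, 1, freq, time)
        z = self.trunk(x)
        return self.hr_head(z), self.murmur_head(z)
```

In such a setup, the two heads would typically be trained jointly with a weighted sum of a regression loss (e.g. L1 on heart rate) and a classification loss (cross-entropy on murmur labels); the actual loss formulation used in the paper may differ.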