This paper was accepted at the workshop "Learning from Time Series for Health" at NeurIPS 2022.

Heart rate (HR) dynamics in response to workout intensity and duration measure key aspects of an individual’s fitness and cardiorespiratory health. Models of exercise physiology have been used to characterize cardiorespiratory fitness in well-controlled laboratory settings, but face additional challenges when applied to wearables in noisy, real-world settings. Here, we introduce a hybrid machine learning model that combines a physiological model of HR and demand during exercise with neural network embeddings in order to learn user-specific fitness parameters. We apply this model at scale to a large set of workout data collected with wearables. We show this model can accurately predict HR response to exercise demand in new workouts. We further show that the learned embeddings correlate with traditional metrics that reflect cardiorespiratory fitness.
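The physiological component described above can be illustrated with a minimal sketch: first-order HR dynamics that relax toward an exercise demand with a user-specific time constant. The parameter names (`hr_rest`, `hr_max`, `tau`) and the linear demand mapping are illustrative assumptions, not the paper's actual model; in the paper these user-specific fitness parameters would be produced by the learned neural network embeddings rather than set by hand.

```python
import numpy as np

def simulate_hr(intensity, hr_rest, hr_max, tau, dt=1.0):
    """Simulate a first-order HR response to workout intensity.

    intensity : array of workout intensity values in [0, 1], one per time step
    hr_rest, hr_max : user-specific resting and maximal HR (illustrative parameters)
    tau : time constant (seconds) governing how quickly HR tracks demand
    """
    hr = np.empty(len(intensity))
    hr_t = float(hr_rest)
    for i, u in enumerate(intensity):
        # Demand: a simple (assumed) linear map from intensity to target HR.
        demand = hr_rest + u * (hr_max - hr_rest)
        # First-order relaxation of HR toward demand with time constant tau.
        hr_t += dt * (demand - hr_t) / tau
        hr[i] = hr_t
    return hr

# Ten minutes of maximal effort: HR rises from rest toward hr_max.
hr = simulate_hr(np.ones(600), hr_rest=60.0, hr_max=180.0, tau=60.0)
```

In a hybrid model of this kind, a network would map a user embedding to `(hr_rest, hr_max, tau)`, and the simulated HR trace would be compared against the wearable's measured HR to train the embedding end to end.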

Related readings and updates.

Large-scale Training of Foundation Models for Wearable Biosignals

Tracking biosignals is crucial for monitoring wellness and preempting the development of severe medical conditions. Today, wearable devices can conveniently record various biosignals, creating the opportunity to monitor health status without disruption to one's daily routine. Despite the widespread use of wearable devices and existing digital biomarkers, the absence of curated data with annotated medical labels hinders the development of new…
Estimating Respiratory Rate From Breath Audio Obtained Through Wearable Microphones

Respiratory rate (RR) is a clinical metric used to assess overall health and physical fitness. An individual’s RR can change due to normal activities like physical exertion during exercise or due to chronic and acute illnesses. Remote estimation of RR offers a cost-effective method to track disease progression and cardio-respiratory fitness over time. This work investigates a model-driven approach to estimate RR from short audio segments obtained…