Using LLMs for Late Multimodal Sensor Fusion for Activity Recognition
Authors: Ilker Demirel, Karan Ketankumar Thakkar, Benjamin Elizalde, Miquel Espi Marques, Shirley Ren, Jaya Narain
This paper was accepted at the Learning from Time Series for Health workshop at NeurIPS 2025.
Sensor data streams provide valuable information about activities and context for downstream applications, though integrating complementary information across modalities can be challenging. We show that large language models (LLMs) can perform late fusion for activity classification from audio and motion time series data. We curated a subset of the Ego4D dataset for diverse activity recognition across contexts (e.g., household activities, sports). The evaluated LLMs achieved 12-class zero- and one-shot classification F1-scores significantly above chance, with no task-specific training. Zero-shot classification via LLM-based fusion of modality-specific model outputs can enable multimodal temporal applications where aligned training data for learning a shared embedding space are limited. Additionally, LLM-based fusion can enable model deployment without the additional memory and compute required by application-specific multimodal models.
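To make the late-fusion idea concrete, here is a minimal sketch: two modality-specific models (not shown) each emit top-k label/probability pairs, which are serialized into a text prompt so an off-the-shelf LLM can produce a zero-shot 12-class decision. The label set, prompt wording, fallback rule, and `llm` callable below are illustrative assumptions, not the paper's actual prompt, models, or classes.

```python
# Hypothetical sketch of LLM-based late fusion for activity recognition.
# Each modality-specific model produces (label, probability) pairs; the LLM
# fuses them zero-shot via a text prompt. All names here are assumptions.

ACTIVITIES = [
    "cooking", "cleaning", "eating", "walking", "running", "cycling",
    "playing sports", "gardening", "shopping", "driving", "watching tv",
    "talking",
]  # placeholder 12-class label set; the paper's exact classes may differ

def build_fusion_prompt(audio_preds, motion_preds):
    """Serialize per-modality predictions into a zero-shot classification prompt."""
    lines = [
        "You are fusing outputs from two sensor models to identify an activity.",
        "Audio model predictions (label: probability):",
        *(f"- {label}: {p:.2f}" for label, p in audio_preds),
        "Motion model predictions (label: probability):",
        *(f"- {label}: {p:.2f}" for label, p in motion_preds),
        f"Choose exactly one activity from: {', '.join(ACTIVITIES)}.",
        "Answer with the activity name only.",
    ]
    return "\n".join(lines)

def classify(audio_preds, motion_preds, llm):
    """`llm` is any text-completion callable: prompt str -> response str."""
    answer = llm(build_fusion_prompt(audio_preds, motion_preds)).strip().lower()
    if answer not in ACTIVITIES:
        # Crude fallback: take the single highest-scoring per-modality label.
        answer = max(audio_preds + motion_preds, key=lambda x: x[1])[0]
    return answer

# Example with a stand-in "LLM" that returns a fixed plausible answer:
fake_llm = lambda prompt: "cooking"
print(classify([("cooking", 0.62), ("talking", 0.21)],
               [("standing still", 0.48), ("walking", 0.30)],
               fake_llm))
```

Note that late fusion of this kind lets each modality keep its own label space and model; only the text-level outputs need to be reconciled, which is what makes it attractive when no aligned multimodal training data exist.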
Speech Foundation Models Generalize to Time Series Tasks from Wearable Sensor Data
November 20, 2025 | Research areas: Health, Methods and Algorithms | Workshop at NeurIPS
This paper was accepted at the Learning from Time Series for Health workshop at NeurIPS 2025.
Both speech and wearable sensor time series encode information in the time and frequency domains, such as spectral power and waveform shapelets. We show that speech foundation models learn representations that generalize beyond the speech domain and achieve state-of-the-art performance on diverse time-series tasks from wearable sensors. Probes trained…
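The teaser's probing setup can be sketched as follows: a frozen speech foundation model embeds raw sensor windows, and a lightweight probe is trained on the resulting embeddings. The encoder stub, window length, and binary task below are assumptions for illustration; the paper's actual model, preprocessing, and tasks may differ.

```python
# Minimal sketch of linear probing on frozen speech-model embeddings of
# wearable sensor data. The encoder is a stand-in, not the paper's model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def embed(waveform: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen speech-model encoder (e.g., pooled hidden states).
    Replace with a real model; here we fake a fixed-size embedding."""
    rng = np.random.default_rng(abs(hash(waveform.tobytes())) % (2**32))
    return rng.standard_normal(256)

# Fake dataset: 1-D sensor windows (e.g., accelerometer magnitude traces)
rng = np.random.default_rng(0)
X_raw = [rng.standard_normal(16000) for _ in range(200)]  # 200 windows
y = rng.integers(0, 2, size=200)                          # binary task labels

X = np.stack([embed(w) for w in X_raw])                   # frozen embeddings
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr) # linear probe
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")   # ~chance on fake data
```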
Towards Time-Series Reasoning with LLMs
December 3, 2024 | Research areas: Methods and Algorithms, Speech and Natural Language Processing | NeurIPS
Multi-modal large language models (MLLMs) have enabled numerous advances in understanding and reasoning in domains like vision, but we have not yet seen this broad success for time series. Although prior work on time-series MLLMs has shown promising performance on time-series forecasting, very few works show how an LLM could be used for time-series reasoning in natural language. We propose a novel multi-modal time-series LLM approach that…