Learning the Relative Composition of EEG Signals Using Pairwise Relative Shift Pretraining
Authors: Christopher Sandino, Sayeri Lala†**, Geeling Chau‡**, Melika Ayoughi§**, Behrooz Mahasseni, Ellen Zippi, Ali Moin, Erdrin Azemi, Hanlin Goh
This paper was accepted at the Foundation Models for the Brain and Body workshop at NeurIPS 2025.
Self-supervised learning (SSL) offers a promising approach for learning electroencephalography (EEG) representations from unlabeled data, reducing the need for expensive annotations for clinical applications like sleep staging and seizure detection. While current EEG SSL methods predominantly use masked reconstruction strategies like masked autoencoders (MAE) that capture local temporal patterns, position prediction pretraining remains underexplored despite its potential to learn long-range dependencies in neural signals. We introduce PAirwise Relative Shift (PARS) pretraining, a novel pretext task that predicts relative temporal shifts between randomly sampled EEG window pairs. Unlike reconstruction-based methods that focus on local pattern recovery, PARS encourages encoders to capture relative temporal composition and long-range dependencies inherent in neural signals. Through comprehensive evaluation on various EEG decoding tasks, we demonstrate that PARS-pretrained transformers consistently outperform existing pretraining strategies in label-efficient and transfer learning settings, establishing a new paradigm for self-supervised EEG representation learning.
**Work done during an Apple internship
†Stanford University
‡California Institute of Technology
§University of Amsterdam
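The pretext task can be made concrete with a short sketch. The PyTorch snippet below is a minimal illustration under several assumptions not stated in the abstract: the encoder, the window length `WIN`, the maximum shift, and the regression head are all hypothetical choices (the paper may, for instance, discretize shifts into classes rather than regress them). Two windows are sampled from each recording at random offsets, and a small head predicts their normalized relative shift from the pair of embeddings.

```python
import torch
import torch.nn as nn

# Minimal PARS-style pretraining sketch (illustrative; not the paper's exact setup).
WIN = 256  # window length in samples (assumed)

class ShiftPredictor(nn.Module):
    """Encode two EEG windows and regress their relative temporal shift."""
    def __init__(self, encoder: nn.Module, emb_dim: int):
        super().__init__()
        self.encoder = encoder  # any window encoder, e.g. a small transformer
        self.head = nn.Sequential(
            nn.Linear(2 * emb_dim, emb_dim), nn.GELU(), nn.Linear(emb_dim, 1)
        )

    def forward(self, win_a, win_b):
        za, zb = self.encoder(win_a), self.encoder(win_b)
        return self.head(torch.cat([za, zb], dim=-1)).squeeze(-1)

def sample_pair(eeg, max_shift):
    """Sample two windows per recording; eeg has shape (batch, channels, time)."""
    B, _, T = eeg.shape
    start_a = torch.randint(0, T - WIN, (B,))
    delta = torch.randint(-max_shift, max_shift + 1, (B,))
    start_b = (start_a + delta).clamp(0, T - WIN)  # keep windows in-bounds
    wa = torch.stack([eeg[i, :, s:s + WIN] for i, s in enumerate(start_a)])
    wb = torch.stack([eeg[i, :, s:s + WIN] for i, s in enumerate(start_b)])
    target = (start_b - start_a).float() / max_shift  # normalized shift in [-1, 1]
    return wa, wb, target

# One pretraining step: regress the predicted shift against the true one.
# model = ShiftPredictor(encoder, emb_dim=128)
# wa, wb, target = sample_pair(batch_eeg, max_shift=512)
# loss = nn.functional.mse_loss(model(wa, wb), target)
```

Because the supervisory signal is the offset between windows rather than the content of any single window, the encoder is pushed to represent where a window sits within the recording's longer-range temporal structure, which is the intuition behind the long-range-dependency claim above.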
MAEEG: Masked Auto-encoder for EEG Representation Learning
November 9, 2022 · Research areas: Health, Methods and Algorithms · Workshop at NeurIPS
This paper was accepted at the Workshop on Learning from Time Series for Health at NeurIPS 2022.
Decoding information from bio-signals such as EEG using machine learning has been challenging due to small datasets and the difficulty of obtaining labels. We propose a reconstruction-based self-supervised learning model, the masked auto-encoder for EEG (MAEEG), for learning EEG representations by learning to reconstruct the masked EEG features using…
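For contrast with PARS, a masked-reconstruction objective of the kind MAEEG describes can be sketched as follows. This is an illustrative PyTorch sketch under assumed choices (segment length, mask ratio, encoder/decoder), not MAEEG's actual architecture or masking scheme: contiguous time segments are zeroed out, and the loss is the reconstruction error restricted to the masked samples.

```python
import torch
import torch.nn as nn

def mask_segments(x, mask_ratio=0.5, seg=16):
    """Zero out random contiguous time segments; assumes T divisible by seg."""
    B, C, T = x.shape
    keep = torch.rand(B, T // seg, device=x.device) > mask_ratio
    mask = keep.repeat_interleave(seg, dim=1)  # (B, T), True = visible
    return x * mask[:, None, :], ~mask         # masked input, masked positions

class MaskedAutoencoder(nn.Module):
    """Reconstruct masked EEG from visible context (illustrative sketch)."""
    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder, self.decoder = encoder, decoder

    def loss(self, x):
        x_masked, masked = mask_segments(x)
        recon = self.decoder(self.encoder(x_masked))  # decoder outputs x's shape
        err = (recon - x) ** 2
        return err[masked[:, None, :].expand_as(err)].mean()  # masked samples only
```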
Subject-Aware Contrastive Learning for Biosignals
August 6, 2021 · Research areas: Health, Methods and Algorithms
Datasets for biosignals, such as electroencephalogram (EEG) and electrocardiogram (ECG), often have noisy labels and a limited number of subjects (<100). To handle these challenges, we propose a self-supervised approach based on contrastive learning to model biosignals with reduced reliance on labeled data and fewer subjects. In this regime of limited labels and subjects, intersubject variability negatively impacts model performance…
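The contrastive objective underlying this line of work is typically an NT-Xent-style loss over two augmented views of the same window. The sketch below is a generic version and does not reproduce the paper's subject-aware augmentation or pairing strategy; the `temperature` value and the view-generation step are assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """NT-Xent loss; z1, z2 are (B, D) embeddings of two views of the same windows."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, D), unit norm
    sim = z @ z.t() / temperature                       # cosine similarity logits
    sim.fill_diagonal_(float('-inf'))                   # exclude self-pairs
    n = z1.shape[0]
    # row i's positive is its other view: i+n for the first half, i-n for the second
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```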