Text-Conditional JEPA for Learning Semantically Rich Visual Representations
Authors: Chen Huang, Xianhang Li, Vimal Thilak, Etai Littwin, Josh Susskind
Image-based Joint-Embedding Predictive Architecture (I-JEPA) offers a promising approach to visual self-supervised learning through masked feature prediction. However, given the inherent visual uncertainty at masked positions, feature prediction remains challenging and may fail to yield semantic representations. In this work, we propose Text-Conditional JEPA (TC-JEPA), which uses image captions to reduce prediction uncertainty. Specifically, we modulate the predicted patch features using a fine-grained text conditioner that computes sparse cross-attention over input text tokens. With such conditioning, patch features become predictable as a function of the text and are thus more semantically meaningful. We show that TC-JEPA improves downstream performance and training stability, with promising scaling properties. TC-JEPA also offers a new vision-language pretraining paradigm based on feature prediction alone, outperforming contrastive methods on diverse tasks, especially those requiring fine-grained visual understanding and reasoning.
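The core conditioning idea described in the abstract can be illustrated with a minimal sketch: masked-patch predictions act as queries in a cross-attention over caption token embeddings, and the retrieved text context modulates the patch features. This is an illustrative toy in NumPy, not the paper's implementation; the real method uses learned projections and sparse attention, and all names and dimensions here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def text_conditioned_patches(patch_feats, text_feats, d):
    """Modulate predicted patch features via cross-attention over text tokens.

    patch_feats: (P, d) predictor outputs at masked positions (queries).
    text_feats:  (T, d) caption token embeddings (keys and values).
    Illustrative only: TC-JEPA itself uses learned projections and
    sparse (not dense) cross-attention.
    """
    scores = patch_feats @ text_feats.T / np.sqrt(d)  # (P, T) attention logits
    attn = softmax(scores, axis=-1)                   # each patch attends to the caption
    context = attn @ text_feats                       # (P, d) text-derived context
    return patch_feats + context                      # residual modulation

rng = np.random.default_rng(0)
patches = rng.standard_normal((4, 8))   # 4 masked-patch predictions
caption = rng.standard_normal((6, 8))   # 6 caption token embeddings
out = text_conditioned_patches(patches, caption, d=8)
```

The residual form keeps the original prediction intact while injecting caption information, which is one simple way to realize "modulation" of patch features by text.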
Rethinking JEPA: Compute-Efficient Video SSL with Frozen Teachers
October 8, 2025 | Research areas: Computer Vision, Methods and Algorithms | Conference: ICLR
Video Joint Embedding Predictive Architectures (V-JEPA) learn generalizable off-the-shelf video representations by predicting masked regions in latent space with an exponential moving average (EMA)-updated teacher. While EMA prevents representation collapse, it complicates scalable model selection and couples teacher and student architectures. We revisit masked-latent prediction and show that a frozen teacher suffices. Concretely, we (i) train a…
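The frozen-teacher variant of masked-latent prediction can be sketched in a few lines: the teacher's weights are fixed, the student is trained to match teacher embeddings only at masked positions, and only the student receives gradient updates. This toy uses linear "encoders" over random patch vectors purely for illustration; the actual architectures and training recipe are those of the paper, not this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: linear "encoders" over flattened patch vectors (illustrative only).
D_IN, D_EMB, N_PATCHES = 16, 8, 10
W_teacher = rng.standard_normal((D_IN, D_EMB))  # frozen: never updated
W_student = rng.standard_normal((D_IN, D_EMB))  # trained

patches = rng.standard_normal((N_PATCHES, D_IN))
mask = rng.random(N_PATCHES) < 0.5              # which patches are masked out

targets = patches @ W_teacher                   # teacher embeddings (no gradient)
preds = patches @ W_student                     # student predictions

# Masked-latent prediction loss: match the teacher only at masked positions.
loss = ((preds[mask] - targets[mask]) ** 2).mean()

# One SGD step on the student alone; the teacher stays frozen throughout.
grad = 2 * patches[mask].T @ (preds[mask] - targets[mask]) / (mask.sum() * D_EMB)
W_student -= 0.01 * grad
```

With no EMA, teacher and student are decoupled: the teacher can be any pretrained encoder, which is what makes model selection and architecture choice simpler in this setting.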
How JEPA Avoids Noisy Features: The Implicit Bias of Deep Linear Self Distillation Networks
November 20, 2024 | Research areas: Computer Vision, Methods and Algorithms | Conference: NeurIPS
Two competing paradigms exist for self-supervised learning of data representations. Joint Embedding Predictive Architecture (JEPA) is a class of architectures in which semantically similar inputs are encoded into representations that are predictive of each other. A recent successful approach that falls under the JEPA framework is self-distillation, where an online encoder is trained to predict the output of the target encoder, sometimes using a…
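The self-distillation setup mentioned here pairs an online (student) encoder with a target (teacher) encoder whose weights track the student via an exponential moving average. A minimal sketch of that EMA update, with hypothetical parameter names:

```python
import numpy as np

def ema_update(teacher, student, momentum=0.996):
    """EMA teacher update used in self-distillation (e.g. JEPA-style training).

    teacher, student: dicts mapping parameter names to arrays.
    Returns the new teacher parameters; the student is trained by
    gradient descent and is untouched here. Illustrative sketch only.
    """
    return {k: momentum * teacher[k] + (1 - momentum) * student[k]
            for k in teacher}

# Example: teacher drifts slowly toward the student.
teacher = {"w": np.zeros(3)}
student = {"w": np.ones(3)}
teacher = ema_update(teacher, student, momentum=0.9)
```

A high momentum (close to 1) makes the teacher a slowly varying average of past students, which is one of the mechanisms the linear-network analysis in papers like this one studies when explaining why such training avoids collapsed or noisy features.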