
As the repository of publicly available pre-trained vision foundation models (VFMs) — such as CLIP, DINOv2, and SAM — grows, users face challenges in storage, memory, and computational efficiency when deploying multiple models concurrently. To address these concerns, we introduce a unique approach that merges the capabilities of multiple VFMs into a single efficient multi-task model. Our method, termed "joint distillation," seamlessly integrates teacher-student learning with self-distillation, operating with just unlabeled image data and drastically cutting down on computational requirements compared to traditional multi-task learning. In a practical demonstration of merging CLIP and SAM, we reveal that the resultant merged model, SAM-CLIP, not only maintains the foundational strengths of both parent models but also uncovers synergistic functions, such as text-prompted zero-shot segmentation. Given the increasing availability of VFMs, our methodology promises to deliver significant value in streamlining model deployment and operations.
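To make the joint distillation idea concrete, below is a minimal sketch, assuming a single shared student backbone with one lightweight head per teacher, trained to regress the outputs of frozen CLIP and SAM encoders on unlabeled images. All module names, dimensions, and loss weights here are illustrative placeholders rather than the actual SAM-CLIP architectures or training recipe, and the self-distillation term the method uses to preserve the base model's original capability is omitted for brevity.

# Illustrative sketch of multi-teacher "joint distillation" on unlabeled images.
# All modules and hyperparameters are placeholders, not the SAM-CLIP recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentBackbone(nn.Module):
    """Shared image encoder; light heads map its features to each teacher's output space."""
    def __init__(self, dim=256, clip_dim=512, sam_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, dim, 4, 4), nn.GELU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.clip_head = nn.Linear(dim, clip_dim)  # mimics CLIP image embeddings
        self.sam_head = nn.Linear(dim, sam_dim)    # mimics SAM image features (flattened here)

    def forward(self, x):
        feats = self.encoder(x)
        return self.clip_head(feats), self.sam_head(feats)

def joint_distillation_step(student, clip_teacher, sam_teacher, images, optimizer,
                            w_clip=1.0, w_sam=1.0):
    """One step: regress both frozen teachers' outputs on an unlabeled image batch.
    (The paper's additional self-distillation/rehearsal term is not shown.)"""
    with torch.no_grad():  # teachers stay frozen
        clip_target = clip_teacher(images)
        sam_target = sam_teacher(images)

    clip_pred, sam_pred = student(images)
    loss = (w_clip * (1 - F.cosine_similarity(clip_pred, clip_target, dim=-1).mean())
            + w_sam * F.mse_loss(sam_pred, sam_target))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Placeholder frozen "teachers" standing in for the real CLIP / SAM image encoders.
    clip_teacher = nn.Sequential(nn.Conv2d(3, 512, 4, 4),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten()).eval()
    sam_teacher = nn.Sequential(nn.Conv2d(3, 256, 4, 4),
                                nn.AdaptiveAvgPool2d(1), nn.Flatten()).eval()
    student = StudentBackbone()
    opt = torch.optim.AdamW(student.parameters(), lr=1e-4)

    unlabeled_batch = torch.randn(4, 3, 224, 224)  # stands in for unlabeled images
    print(joint_distillation_step(student, clip_teacher, sam_teacher, unlabeled_batch, opt))

Because only unlabeled images and frozen teachers are involved, a step like this is far cheaper than multi-task training from scratch; in practice the student would be initialized from one of the parent models rather than randomly.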

Related readings and updates.

SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding

This paper was accepted at the UniReps Workshop at NeurIPS 2023. The landscape of publicly available vision foundation models (VFMs), such as CLIP and Segment Anything Model (SAM), is expanding rapidly. VFMs are endowed with distinct capabilities stemming from their pre-training objectives. For instance, CLIP excels in semantic understanding, while SAM specializes in spatial understanding for segmentation. In this work, we introduce a simple…

Apple Natural Language Understanding Workshop 2023

Earlier this year, Apple hosted the Natural Language Understanding workshop. This two-day hybrid event brought together Apple and members of the academic research community for talks and discussions on the state of the art in natural language understanding.

In this post, we share highlights from workshop discussions and recordings of select workshop talks.
