Merging Vision Foundation Models via Multi-Task Distillation
Authors: Haoxiang Wang, Pavan Kumar Anasosalu Vasu, Fartash Faghri, Raviteja Vemulapalli, Mehrdad Farajtabar, Sachin Mehta, Mohammad Rastegari, Oncel Tuzel, Hadi Pouransari
As the repository of publicly available pre-trained vision foundation models (VFMs) — such as CLIP, DINOv2, and SAM — grows, users face challenges in storage, memory, and computational efficiency when deploying multiple models concurrently. To address these concerns, we introduce a unique approach that merges the capabilities of multiple VFMs into a single efficient multi-task model. Our method, termed "joint distillation," seamlessly integrates teacher-student learning with self-distillation, operating with just unlabeled image data and drastically cutting down on computational requirements compared to traditional multi-task learning. In a practical demonstration of merging CLIP and SAM, we reveal that the resultant merged model, SAM-CLIP, not only maintains the foundational strengths of both parent models but also uncovers synergistic functions, such as text-prompted zero-shot segmentation. Given the increasing availability of VFMs, our methodology promises to deliver significant value in streamlining model deployment and operations.
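To make the training recipe more concrete, below is a minimal PyTorch sketch of what "joint distillation" on unlabeled images could look like: a student with a shared backbone and one lightweight head per teacher is trained to match a frozen CLIP teacher's image embeddings while also regressing the frozen SAM teacher's features (self-distillation). The class and function names (MergedStudent, joint_distillation_loss), the single-vector teacher outputs, and the specific cosine/MSE loss choices are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of joint distillation from two frozen teachers (CLIP, SAM)
# into one student, using only unlabeled images. Assumes the backbone maps a
# batch of images to (B, feat_dim) features and each teacher to a matching
# target tensor; these shapes are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MergedStudent(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, clip_dim: int, sam_dim: int):
        super().__init__()
        self.backbone = backbone                        # e.g. initialized from SAM's image encoder
        self.clip_head = nn.Linear(feat_dim, clip_dim)  # head distilled from the CLIP teacher
        self.sam_head = nn.Linear(feat_dim, sam_dim)    # head distilled from the SAM teacher

    def forward(self, images):
        feats = self.backbone(images)                   # shared representation
        return self.clip_head(feats), self.sam_head(feats)

def joint_distillation_loss(student, clip_teacher, sam_teacher, images, lam=1.0):
    """Distill both frozen teachers into the student from unlabeled images."""
    clip_pred, sam_pred = student(images)
    with torch.no_grad():                               # teachers are frozen
        clip_target = clip_teacher(images)
        sam_target = sam_teacher(images)
    # Cosine-distance distillation toward CLIP image embeddings (semantic knowledge).
    clip_loss = 1.0 - F.cosine_similarity(clip_pred, clip_target, dim=-1).mean()
    # Feature regression toward the original SAM outputs (preserves spatial knowledge).
    sam_loss = F.mse_loss(sam_pred, sam_target)
    return clip_loss + lam * sam_loss
```

Because both targets come from frozen pretrained models, the loop needs no labels; the weighting lam trades off how strongly the student preserves each parent's behavior.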
May 6, 2024 · Research area: Computer Vision · Conference: ICML
Vision Foundation Models (VFMs) pretrained on massive datasets exhibit impressive performance on various downstream tasks, especially with limited labeled target data. However, due to their high inference compute cost, these models cannot be deployed for many real-world applications. Motivated by this, we ask the following important question, "How can we leverage the knowledge from a large VFM to train a small task-specific model for a new target...
November 30, 2023 · Research areas: Computer Vision; Methods and Algorithms · Conference: NeurIPS, Workshop at CVPR
This paper was accepted at the UniReps Workshop at NeurIPS 2023, and the eLVM Workshop at CVPR 2024.
The landscape of publicly available vision foundation models (VFMs), such as CLIP and Segment Anything Model (SAM), is expanding rapidly. VFMs are endowed with distinct capabilities stemming from their pre-training objectives. For instance, CLIP excels in semantic understanding, while SAM specializes in spatial understanding for segmentation. In...