VeCLIP: Improving CLIP Training via Visual-enriched Captions
Authors: Jeff Lai*, Haotian Zhang*, Bowen Zhang, Wentao Wu, Felix Bai, Aleksei Timofeev, Xianzhi Du, Zhe Gan, Jiulong Shan, Chen-Nee Chuah, Yinfei Yang, Meng Cao
*Equal Contributors
Large-scale web-crawled datasets are fundamental to the success of pre-training vision-language models such as CLIP. However, the inherent noise and potential irrelevance of web-crawled AltTexts pose challenges in achieving precise image-text alignment. Existing methods that use large language models (LLMs) for caption rewriting have shown promise on small, curated datasets like CC3M and CC12M. This study introduces a scalable pipeline for noisy caption rewriting. Unlike recent LLM rewriting techniques, we emphasize the incorporation of visual concepts into captions, termed Visual-enriched Captions (VeCap). To ensure data diversity, we propose a novel mixed training scheme that optimizes the utilization of AltTexts alongside the newly generated VeCap. We showcase the adaptation of this method for training CLIP on large-scale web-crawled datasets, termed VeCLIP. Employing this cost-effective pipeline, we effortlessly scale our dataset up to 300 million samples, named the VeCap dataset. Our results show significant advantages in image-text alignment and overall model performance. For example, VeCLIP achieves up to +25.2% gain on COCO and Flickr30k retrieval tasks under the 12M setting. For data efficiency, VeCLIP achieves a +3% gain while using only 14% of the data employed in vanilla CLIP and 11% of that in ALIGN. We also note that the VeCap data is complementary to other well-curated datasets suited to zero-shot classification tasks. When combining VeCap and DFN, our model achieves strong performance on both image-text retrieval and zero-shot classification tasks, e.g., 83.1% accuracy@1 on ImageNet zero-shot for an H/14 model.
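To make the mixed training scheme concrete, here is a minimal, hypothetical sketch rather than the authors' released code: it assumes each training record carries both the original AltText and its VeCap rewrite, and randomly selects one caption per sample so the model is trained on a mixture of both sources. The class name `MixedCaptionDataset`, the `vecap_prob` parameter, and the record keys are illustrative assumptions.

```python
import random
from torch.utils.data import Dataset


class MixedCaptionDataset(Dataset):
    """Illustrative dataset pairing each image with either its original
    AltText or its LLM-rewritten VeCap caption, chosen at random per sample."""

    def __init__(self, records, vecap_prob=0.5):
        # `records` is assumed to be a list of dicts with keys
        # "image", "alt_text", and "vecap" (the visual-enriched caption).
        self.records = records
        self.vecap_prob = vecap_prob  # probability of picking the VeCap caption

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        rec = self.records[idx]
        # Mixed training: retain the diversity of raw AltTexts while also
        # exposing the model to the cleaner, visually grounded VeCap rewrites.
        use_vecap = random.random() < self.vecap_prob
        caption = rec["vecap"] if use_vecap else rec["alt_text"]
        return rec["image"], caption
```

Sampling the caption per example, rather than duplicating every image with both captions, keeps batches diverse without increasing dataset size or training steps.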
MobileCLIP2: Improving Multi-Modal Reinforced Training
September 22, 2025 · Research areas: Computer Vision, Methods and Algorithms · Transactions on Machine Learning Research (TMLR)
This paper received Featured Certification from Transactions on Machine Learning Research (TMLR) 2025.
Foundation image-text models such as CLIP with zero-shot capabilities enable a wide array of applications. MobileCLIP is a recent family of image-text models at 3-15ms latency and 50-150M parameters with state-of-the-art zero-shot accuracy. The main ingredients in MobileCLIP were its low-latency and light architectures and a novel multi-modal…
Revisit Large-Scale Image–Caption Data in Pre-training Multimodal Foundation Models
April 8, 2025 · Research areas: Computer Vision, Methods and Algorithms · ICLR
Recent advancements in multimodal models highlight the value of rewritten captions for improving performance, yet key challenges remain. Notably, the role of synthetic captions and their interaction with original web-crawled AltTexts in pre-training is still unclear. Additionally, different multimodal foundation models may have distinct preferences for specific caption formats while the efforts of studying the optimal captions for each foundation…