RubiCap: Rubric-Guided Reinforcement Learning for Dense Image Captioning
Authors: Tzu-Heng Huang†**, Sirajul Salekin, Javier Movellan, Frederic Sala†, Manjot Bilkhu
Dense image captioning is critical for cross-modal alignment in vision-language pretraining and text-to-image generation, but scaling expert-quality annotations is prohibitively expensive. While synthetic captioning via strong vision-language models (VLMs) is a practical alternative, supervised distillation often yields limited output diversity and weak generalization. Reinforcement learning (RL) could overcome these limitations, but its successes have so far been concentrated in verifiable domains that rely on deterministic checkers — a luxury not available in open-ended captioning. We address this bottleneck with RubiCap, a novel RL framework that derives fine-grained, sample-specific reward signals from LLM-written rubrics. RubiCap first assembles a diverse committee of candidate captions, then employs an LLM rubric writer to extract consensus strengths and diagnose deficiencies in the current policy. These insights are converted into explicit evaluation criteria, enabling an LLM judge to decompose holistic quality assessment and replace coarse scalar rewards with structured, multi-faceted evaluations. Across extensive benchmarks, RubiCap achieves the highest win rates on CapArena, outperforming supervised distillation, prior RL methods, human-expert annotations, and GPT-4V-augmented outputs. On CaptionQA, it demonstrates superior word efficiency: our 7B model matches Qwen2.5-VL-32B-Instruct, and our 3B model surpasses its 7B counterpart. Remarkably, using the compact RubiCap-3B as a captioner produces stronger pretrained VLMs than those trained on captions from proprietary models.
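The core mechanism described above — converting rubric criteria plus per-criterion judge scores into a single RL reward — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names `Criterion` and `rubric_reward` are hypothetical, and in practice the per-criterion scores would come from an LLM judge evaluating a candidate caption against the rubric, rather than being supplied directly.

```python
from dataclasses import dataclass


@dataclass
class Criterion:
    """One rubric item written by the LLM rubric writer (illustrative)."""
    name: str
    weight: float  # relative importance of this criterion


def rubric_reward(criterion_scores: dict[str, float],
                  rubric: list[Criterion]) -> float:
    """Aggregate per-criterion judge scores (each in [0, 1]) into one
    scalar reward via a weighted average over the rubric.

    Criteria the judge did not score default to 0, so missing a
    rubric item penalizes the caption rather than being ignored.
    """
    total_weight = sum(c.weight for c in rubric)
    weighted = sum(c.weight * criterion_scores.get(c.name, 0.0)
                   for c in rubric)
    return weighted / total_weight


# Example: a sample-specific rubric with two criteria, where an LLM
# judge (simulated here by fixed scores) rates a candidate caption.
rubric = [
    Criterion("mentions all salient objects", weight=2.0),
    Criterion("describes spatial relations", weight=1.0),
]
scores = {"mentions all salient objects": 1.0,
          "describes spatial relations": 0.4}
reward = rubric_reward(scores, rubric)  # (2*1.0 + 1*0.4) / 3 = 0.8
```

The weighted-average form keeps the reward bounded and interpretable per criterion, which is the structural advantage over a single coarse scalar score; the actual aggregation used by RubiCap may differ.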
MobileCLIP2: Improving Multi-Modal Reinforced Training
September 22, 2025 · Research areas: Computer Vision; Methods and Algorithms · Transactions on Machine Learning Research (TMLR)
This paper received Featured Certification from Transactions on Machine Learning Research (TMLR) 2025.
Foundation image-text models such as CLIP with zero-shot capabilities enable a wide array of applications. MobileCLIP is a recent family of image-text models at 3-15ms latency and 50-150M parameters with state-of-the-art zero-shot accuracy. The main ingredients in MobileCLIP were its low-latency and light architectures and a novel multi-modal…
Revisit Large-Scale Image–Caption Data in Pre-training Multimodal Foundation Models
April 8, 2025 · Research areas: Computer Vision; Methods and Algorithms · Conference: ICLR
Recent advancements in multimodal models highlight the value of rewritten captions for improving performance, yet key challenges remain. Notably, the role of synthetic captions and their interaction with original web-crawled AltTexts in pre-training is still unclear. Additionally, different multimodal foundation models may have distinct preferences for specific caption formats while the efforts of studying the optimal captions for each foundation…