UniGen-1.5: Enhancing Image Generation and Editing through Reward Unification in Reinforcement Learning
Authors: Rui Tian†, Mingfei Gao§‡, Haiming Gang, Jiasen Lu, Zhe Gan, Yinfei Yang, Zuxuan Wu†§, Afshin Dehghan
We present UniGen-1.5, a unified multimodal large language model (MLLM) for advanced image understanding, generation, and editing. Building upon UniGen, we comprehensively enhance the model architecture and training pipeline to strengthen image understanding and generation while unlocking strong image editing ability. In particular, we propose a unified Reinforcement Learning (RL) strategy that improves image generation and image editing jointly via shared reward models. To further enhance image editing performance, we propose a lightweight Edit Instruction Alignment stage that significantly improves comprehension of editing instructions, which is essential for the success of RL training. Experimental results show that UniGen-1.5 demonstrates competitive understanding and generation performance. Specifically, UniGen-1.5 achieves overall scores of 0.89 on GenEval and 4.31 on ImgEdit, surpassing state-of-the-art models such as BAGEL and reaching performance comparable to proprietary models such as GPT-Image-1.
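The unified RL strategy described above can be sketched in miniature: one shared reward model scores rollouts from both tasks (text-to-image generation and instruction-guided editing), and a single advantage-weighted update drives the policy. The code below is an illustrative toy, not UniGen-1.5's actual implementation; the reward function, sample format, and group-mean baseline are all assumptions made for the sketch.

```python
# Toy sketch of a unified RL step with a shared reward model.
# All names and the word-overlap "reward" are illustrative stand-ins.

def shared_reward(prompt: str, output: str) -> float:
    # Stand-in for a learned reward model (e.g., an MLLM judge scoring
    # prompt-output alignment). Here: fraction of prompt words covered.
    p, o = set(prompt.lower().split()), set(output.lower().split())
    return len(p & o) / max(len(p), 1)

def unified_rl_step(samples):
    # samples: list of (task, prompt, output), task is "gen" or "edit".
    # The same reward signal is applied to both task types, so one RL
    # objective improves generation and editing jointly.
    rewards = [shared_reward(prompt, out) for _, prompt, out in samples]
    baseline = sum(rewards) / len(rewards)  # group-mean baseline
    return [(task, r - baseline) for (task, _, _), r in zip(samples, rewards)]

advantages = unified_rl_step([
    ("gen",  "a red apple on a table", "red apple table photo"),
    ("edit", "make the apple green",   "green apple table photo"),
])
```

The group-mean baseline makes the advantages of generation and editing samples directly comparable within a batch, which is one simple way a shared reward can couple the two tasks.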
GIE-Bench: Towards Grounded Evaluation for Text-Guided Image Editing
December 16, 2025 · Research area: Computer Vision
Editing images using natural language instructions has become a natural and expressive way to modify visual content; yet, evaluating the performance of such models remains challenging. Existing evaluation approaches often rely on image-text similarity metrics like CLIP, which lack precision. In this work, we introduce a new benchmark designed to evaluate text-guided image editing models in a more grounded manner, along two critical dimensions:…
UniGen: Enhanced Training & Test-Time Strategies for Unified Multimodal Understanding and Generation
September 22, 2025 · Research area: Computer Vision · Conference: NeurIPS
We introduce UniGen, a unified multimodal large language model (MLLM) capable of image understanding and generation. We study the full training pipeline of UniGen from a data-centric perspective, including multi-stage pre-training, supervised fine-tuning, and direct preference optimization. More importantly, we propose a new Chain-of-Thought Verification (CoT-V) strategy for test-time scaling, which significantly boosts UniGen’s image generation…
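Test-time scaling with a verification step, as in the Chain-of-Thought Verification (CoT-V) strategy mentioned above, can be sketched as best-of-N selection: sample several candidates, have a verifier check each against the prompt, and keep the highest-scoring one. The verifier below is a toy substring check standing in for the model's actual step-by-step verification, and all names are assumptions for the sketch.

```python
# Hedged sketch of verification-based test-time scaling (best-of-N).
# The real CoT-V verifier reasons step by step; this toy version just
# checks each comma-separated prompt requirement against the candidate.

def verify(prompt: str, candidate: str) -> float:
    # Return the fraction of prompt requirements the candidate satisfies.
    checks = prompt.split(", ")
    passed = sum(1 for c in checks if c in candidate)
    return passed / len(checks)

def best_of_n(prompt: str, candidates: list) -> str:
    # Spend extra test-time compute on verification to buy output quality.
    return max(candidates, key=lambda c: verify(prompt, c))

pick = best_of_n(
    "two cats, on a sofa",
    ["one cat on a sofa", "two cats, on a sofa, sunny room", "two dogs"],
)
# pick == "two cats, on a sofa, sunny room"
```

The design point is that verification is usually cheaper and more reliable than generation, so scoring N samples and keeping the best one improves quality without retraining the generator.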