Multimodal large language models (MLLMs) exhibit impressive vision-language capabilities but often struggle with fine-grained spatial understanding. We introduce FERRET, a novel MLLM capable of understanding spatial referring of any shape or granularity within an image and of accurately grounding open-vocabulary descriptions. To enable versatile referring, we propose a hybrid region representation that marries discrete coordinates with continuous visual features. To fortify the model's capability, we construct a comprehensive refer-and-ground instruction-tuning dataset that contains rich hierarchical spatial knowledge and flexible location-aware data, and that promotes model robustness. Our evaluations show that FERRET achieves superior performance on conventional referring and grounding tasks as well as on region-based, localization-demanding multimodal chat, and exhibits a notable reduction in object hallucination.
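To make the hybrid region representation concrete, here is a minimal sketch (with a hypothetical helper name, not the released implementation): a referred region contributes both discrete coordinate tokens and a continuous feature vector pooled from the image features under its mask.

```python
import torch

def hybrid_region_representation(feature_map, mask, bbox):
    """Minimal sketch of a hybrid region representation.

    feature_map: (C, H, W) continuous image features.
    mask: (H, W) binary mask of the referred region (any shape).
    bbox: (x1, y1, x2, y2) discrete coordinates, e.g. quantized to [0, 999].
    Returns coordinate tokens plus one pooled feature vector that
    together stand in for the region placeholder in the prompt.
    """
    # Discrete part: coordinates serialized as plain text tokens.
    coord_tokens = "[{}, {}, {}, {}]".format(*bbox)
    # Continuous part: average-pool the features covered by the mask.
    region_feat = feature_map[:, mask.bool()].mean(dim=-1)  # (C,)
    return coord_tokens, region_feat

# Toy usage: a 16x16 feature map and a rectangular region.
feats = torch.randn(256, 16, 16)
mask = torch.zeros(16, 16); mask[4:10, 3:12] = 1
tokens, feat = hybrid_region_representation(feats, mask, (120, 160, 480, 620))
print(tokens, feat.shape)  # "[120, 160, 480, 620]" torch.Size([256])
```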

Figure 1: Ferret enables referring and grounding capabilities for multimodal large language models (MLLMs). For referring, a user can refer to a region or an object with a point, a box, or any free-form shape. The region_N placeholder (green) in the input is replaced by the proposed hybrid representation before being fed into the LLM. For grounding, Ferret can accurately ground any open-vocabulary description; box_N (red) in the output denotes the predicted bounding-box coordinates.
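As a rough illustration of this text interface (the template and regex below are assumptions, not Ferret's exact prompt format), the referred region in the input is spelled out as coordinates plus a feature placeholder, and grounded phrases in the output carry box coordinates that can be parsed back out:

```python
import re

# Hypothetical prompt template: the region placeholder becomes
# "[x1, y1, x2, y2] <region_feature>" before tokenization, and the
# continuous feature is injected at the <region_feature> position.
question = "What is the purpose of the object [100, 600, 500, 900] <region_feature>?"

# A grounded answer interleaves phrases with their predicted boxes.
answer = "It is a ferret [250, 370, 640, 860] lying on a sofa [10, 200, 990, 990]."

# Pull out every grounded phrase/box pair from the model output.
for match in re.finditer(r"(\w[\w\s]*?)\s*\[(\d+), (\d+), (\d+), (\d+)\]", answer):
    phrase, *coords = match.groups()
    print(phrase.strip(), "->", tuple(map(int, coords)))
```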
Figure 2: Overview of the proposed Ferret model architecture. (Left) The proposed hybrid region representation and spatial-aware visual sampler. (Right) Overall model architecture. All parameters besides the image encoder are trainable.
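The caption only names the spatial-aware visual sampler; the sketch below is a simplified stand-in for it (the paper's sampler performs richer point-feature fusion), showing the core idea of sampling points inside a free-form region and reading out their visual features:

```python
import torch
import torch.nn.functional as F

def spatial_aware_sample(feature_map, mask, num_points=32):
    """Simplified sampler: pick points inside a free-form region and
    look up their features by bilinear interpolation.

    feature_map: (1, C, H, W); mask: (H, W) binary region mask.
    Returns (num_points, C) sampled point features.
    """
    ys, xs = torch.nonzero(mask, as_tuple=True)      # pixels in the region
    idx = torch.randint(len(xs), (num_points,))      # sample with replacement
    H, W = mask.shape
    # Normalize to [-1, 1] for grid_sample's coordinate convention.
    grid = torch.stack([xs[idx] / (W - 1), ys[idx] / (H - 1)], dim=-1) * 2 - 1
    grid = grid.view(1, 1, num_points, 2).float()
    feats = F.grid_sample(feature_map, grid, align_corners=True)  # (1, C, 1, P)
    return feats[0, :, 0].T                          # (P, C)

fmap = torch.randn(1, 256, 24, 24)
mask = torch.zeros(24, 24); mask[5:15, 8:20] = 1
print(spatial_aware_sample(fmap, mask).shape)  # torch.Size([32, 256])
```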

Related readings and updates.

ReALM: Reference Resolution as Language Modeling

Reference resolution is an important problem, one that is essential for understanding and successfully handling context of different kinds. This context includes both previous turns and context that pertains to non-conversational entities, such as entities on the user's screen or those running in the background. While LLMs have been shown to be extremely powerful for a variety of tasks, their use in reference resolution, particularly for…
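One way to picture the "reference resolution as language modeling" framing (a hedged guess at the setup, not ReALM's actual prompt or data format) is to serialize candidate on-screen entities as tagged text and have the LLM emit the tag of the referent:

```python
# Hypothetical serialization: each candidate entity gets an index tag,
# and the LLM is asked to emit the tag the user's request refers to.
entities = [
    {"id": 1, "type": "phone_number", "text": "555-0134"},
    {"id": 2, "type": "business",     "text": "Joe's Pizza"},
    {"id": 3, "type": "address",      "text": "12 Main St"},
]

def build_prompt(turns, entities):
    screen = "\n".join(f"<{e['id']}> {e['type']}: {e['text']}" for e in entities)
    history = "\n".join(turns)
    return f"Screen entities:\n{screen}\n\nConversation:\n{history}\n\nReferenced entity id:"

print(build_prompt(["User: call the restaurant"], entities))
# The model would be trained/prompted to answer "2" here.
```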

Ferret-v2: An Improved Baseline for Referring and Grounding

While Ferret seamlessly integrates regional understanding into the large language model (LLM) to facilitate its referring and grounding capability, it has certain limitations: it is constrained by a fixed, pre-trained visual encoder and fails to perform well on broader tasks. In this work, we unveil Ferret-v2, a significant upgrade to Ferret, with three key designs. (1) Any-resolution grounding and referring: a flexible approach that effortlessly…
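The any-resolution design in point (1) is only named in this excerpt; a common recipe for handling arbitrary resolutions (offered here as an illustration under that assumption, not as Ferret-v2's exact method) encodes a low-resolution global view alongside grid tiles cropped at the encoder's native input size:

```python
from PIL import Image

def any_resolution_tiles(img: Image.Image, tile=336, grid=(2, 2)):
    """Illustrative any-resolution split: one low-res global view plus
    a grid of local tiles, each at the encoder's input size."""
    gw, gh = grid
    big = img.resize((tile * gw, tile * gh))
    tiles = [big.crop((c * tile, r * tile, (c + 1) * tile, (r + 1) * tile))
             for r in range(gh) for c in range(gw)]
    global_view = img.resize((tile, tile))
    return [global_view] + tiles  # 1 + gw*gh images for the encoder

views = any_resolution_tiles(Image.new("RGB", (1024, 768)), tile=336)
print(len(views), views[0].size)  # 5 (336, 336)
```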