Multimodal large language models (MLLMs) exhibit impressive vision-language capabilities but often struggle with fine-grained spatial understanding. We introduce FERRET, a novel MLLM capable of understanding spatial referring of any shape or granularity within an image and accurately grounding open-vocabulary descriptions. We propose a hybrid region representation that marries discrete coordinates with continuous visual features, endowing versatile referring capability. To fortify this capability, we construct a comprehensive refer-and-ground dataset that contains hierarchical spatial knowledge and flexible location-aware instruction-tuning data, and promotes model robustness. Our evaluations show that FERRET achieves superior performance on conventional referring and grounding tasks as well as on region-based, localization-demanding multimodal chatting, and exhibits a notable reduction in object hallucination.

Figure 1: Ferret enables referring and grounding capabilities for a multimodal large language model (MLLM). For referring, a user can refer to a region or an object with a point, a box, or any free-form shape. The regionN (green) in the input is replaced by the proposed hybrid representation before being fed into the LLM. For grounding, Ferret accurately grounds any open-vocabulary description; the boxN (red) in the output denotes the predicted bounding-box coordinates.
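As a rough sketch of this referring flow, the snippet below shows how a region placeholder in a user question might be replaced by a hybrid of discrete box coordinates and a continuous region feature before the sequence reaches the LLM. The `<region>` and `<region_feature>` placeholders, the coordinate text format, and the embedding size are illustrative assumptions, not Ferret's actual interface.

```python
import torch

def build_referring_prompt(question: str,
                           box_xyxy: tuple[int, int, int, int],
                           region_feature: torch.Tensor) -> tuple[str, list[torch.Tensor]]:
    """Illustrative sketch of hybrid referring: discrete coordinates rendered
    as text plus a continuous region feature supplied alongside the prompt.

    `region_feature` stands in for the embedding produced by a visual sampler;
    the exact token format below is an assumption for illustration only.
    """
    x1, y1, x2, y2 = box_xyxy
    # Discrete part: coordinates the LLM can read as ordinary text tokens.
    coord_text = f"[{x1}, {y1}, {x2}, {y2}]"
    # Continuous part: a placeholder whose embedding is swapped for the
    # region feature before the sequence is fed to the LLM.
    prompt = question.replace("<region>", f"{coord_text} <region_feature>")
    return prompt, [region_feature]

# Usage: refer to a boxed object and ask about it.
feat = torch.randn(4096)  # hypothetical region-embedding dimension
prompt, extra_embeds = build_referring_prompt(
    "What is the object <region> used for?", (40, 60, 180, 220), feat)
print(prompt)
```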
Figure 2: Overview of the proposed Ferret model architecture. (Left) The proposed hybrid region representation and spatial-aware visual sampler. (Right) Overall model architecture. All parameters besides the image encoder are trainable.
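The spatial-aware visual sampler is the component that turns a free-form region into the continuous feature used in the hybrid representation. Below is a much-simplified stand-in, assuming random point sampling plus average pooling over the frozen image encoder's feature map; Ferret's actual sampler is more elaborate, so treat this purely as a sketch of the idea.

```python
import torch
import torch.nn.functional as F

def sample_region_feature(feature_map: torch.Tensor,
                          region_mask: torch.Tensor,
                          num_points: int = 64) -> torch.Tensor:
    """Simplified stand-in for a spatial-aware visual sampler: sample feature
    vectors inside a free-form binary mask and pool them into one embedding.

    feature_map: (C, H, W) features from the frozen image encoder.
    region_mask: (H_img, W_img) binary mask of the referred region.
    """
    C, H, W = feature_map.shape
    # Resize the mask to the feature-map resolution.
    mask = F.interpolate(region_mask[None, None].float(), size=(H, W),
                         mode="nearest")[0, 0]
    ys, xs = torch.nonzero(mask > 0.5, as_tuple=True)
    if ys.numel() == 0:
        # Empty mask: fall back to global average pooling.
        return feature_map.mean(dim=(1, 2))
    # Random sampling here; a real sampler would use a smarter
    # point-selection and neighborhood-aggregation scheme.
    idx = torch.randint(0, ys.numel(), (min(num_points, ys.numel()),))
    sampled = feature_map[:, ys[idx], xs[idx]]   # (C, num_points)
    return sampled.mean(dim=1)                   # (C,)

# Usage with dummy tensors:
feat_map = torch.randn(1024, 24, 24)
mask = torch.zeros(336, 336)
mask[100:200, 80:260] = 1
region_embed = sample_region_feature(feat_map, mask)
print(region_embed.shape)  # torch.Size([1024])
```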

Related readings and updates.

Ferret-v2: An Improved Baseline for Referring and Grounding

While Ferret seamlessly integrates regional understanding into the large language model (LLM) to facilitate referring and grounding, it has certain limitations: it is constrained by the pre-trained, fixed visual encoder and fails to perform well on broader tasks. In this work, we unveil Ferret-v2, a significant upgrade to Ferret, with three key designs. (1) Any-resolution grounding and referring: a flexible approach that effortlessly…
See paper details

AGRaME: Any Granularity Ranking with Multi-Vector Embeddings

Ranking is a fundamental and popular problem in search. However, existing ranking algorithms usually restrict the granularity of ranking to full passages or require a specific dense index for each desired level of granularity. Such lack of flexibility in granularity negatively affects many applications that can benefit from more granular ranking, such as sentence-level ranking for open-domain question-answering, or proposition-level ranking for…
See paper details