Scaling Laws for Native Multimodal Models
Authors: Mustafa Shukor†‡, Enrico Fini, Victor Guilherme Turrisi da Costa, Matthieu Cord‡, Joshua Susskind, Alaaeldin El-Nouby
Building general-purpose models that can effectively perceive the world through multimodal signals has been a long-standing goal. Current approaches integrate separately pre-trained components, for example connecting a vision encoder to an LLM and continuing with multimodal training. While such approaches exhibit remarkable sample efficiency, it remains an open question whether these late-fusion architectures are inherently superior. In this work, we revisit the architectural design of native multimodal models (NMMs), those trained from the ground up on all modalities, and conduct an extensive scaling-laws study spanning 457 trained models with different architectures and training mixtures. Our investigation reveals no inherent advantage of late-fusion architectures over early-fusion ones, which do not rely on image encoders. On the contrary, early fusion exhibits stronger performance at lower parameter counts, is more efficient to train, and is easier to deploy. Motivated by the strong performance of early-fusion architectures, we show that incorporating Mixture of Experts (MoEs) allows models to learn modality-specific weights, significantly enhancing performance.
†Work done during an internship at Apple.
‡Sorbonne University
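To make the two central ideas concrete: scaling-laws studies of this kind typically fit a parametric form such as L(N, D) = E + A·N^(−α) + B·D^(−β), relating loss to parameter count N and training tokens D (the standard Chinchilla-style form; the paper's exact parameterization may differ). The sketch below illustrates an early-fusion transformer block with modality-specific expert MLPs. It is a minimal, hypothetical example, not the authors' implementation: all module names and sizes are invented, and tokens are hard-routed by a modality id, whereas a learned router is also possible.

```python
# Illustrative sketch only: early fusion with modality-specific experts.
# All names, sizes, and the hard modality routing are assumptions.
import torch
import torch.nn as nn


class ModalityMoEBlock(nn.Module):
    """One transformer block whose MLP is chosen per token by modality."""

    def __init__(self, dim: int = 512, n_heads: int = 8, n_modalities: int = 2):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        # One expert MLP per modality (e.g. 0 = text, 1 = image patches).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_modalities)
        )

    def forward(self, x: torch.Tensor, modality: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim); modality: (batch, seq) integer id per token.
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x)
        out = torch.zeros_like(h)
        for i, expert in enumerate(self.experts):
            mask = modality == i  # route each token to its modality's expert
            if mask.any():
                out[mask] = expert(h[mask])
        return x + out


# Early fusion: raw image patches and text are embedded into one token stream
# and processed jointly from the first layer (no pretrained vision encoder).
text_emb = nn.Embedding(32000, 512)
patch_emb = nn.Linear(16 * 16 * 3, 512)  # flattened 16x16 RGB patches -> tokens

tokens = torch.cat(
    [patch_emb(torch.randn(1, 196, 768)),            # 196 image-patch tokens
     text_emb(torch.randint(0, 32000, (1, 32)))],    # 32 text tokens
    dim=1,
)
modality = torch.cat(
    [torch.ones(1, 196, dtype=torch.long), torch.zeros(1, 32, dtype=torch.long)],
    dim=1,
)
out = ModalityMoEBlock()(tokens, modality)  # (1, 228, 512)
```

The design point the sketch highlights is that image patches enter the same token stream as text from layer one, so no separately pre-trained image encoder is needed, while the per-modality experts give each modality its own MLP weights within a shared model.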