
Foundation models are trained on large-scale web-crawled datasets, which often contain noise, biases, and irrelevant information. This motivates the use of data selection techniques, which can be divided into model-free variants, relying on heuristic rules and downstream datasets, and model-based variants, e.g., those using influence functions. The former can be expensive to design and risk introducing unwanted dependencies, while the latter are often computationally prohibitive. Instead, we propose an efficient model-based approach built on the Mimic Score, a new data-quality metric that leverages the weights of a reference model to assess the usefulness of individual samples for training a new model. The score measures the alignment between a sample's gradient and a target direction in weight space induced by the reference model. Using the derived Mimic Scores, we develop Grad-Mimic, a framework that prioritizes samples for learning, creates effective filters, and automates data selection. Empirically, using Mimic Scores to guide training improves data efficiency, yields consistent performance gains across six image datasets, and enhances CLIP models. Moreover, Mimic Score-based filters improve on existing filtering methods, e.g., removing 4.7 million samples to train better CLIP models, while offering an accurate estimate of training dataset quality.
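To make the metric concrete, here is a minimal sketch of how such an alignment score could be computed in PyTorch. It is an illustration under assumptions rather than the paper's implementation: the function name `mimic_score`, the choice of cosine similarity as the alignment measure, and the use of the full flattened weight vector are all assumptions. The sketch simply aligns a sample's negative loss gradient with the direction from the current weights toward the reference model's weights, as the abstract describes.

```python
import torch
import torch.nn.functional as F

def mimic_score(model, ref_model, loss_fn, x, y):
    """Hypothetical sketch: score one sample by the cosine alignment between
    its update direction (the negative loss gradient) and the weight-space
    direction from the current model toward the reference model."""
    model.zero_grad()
    loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()

    # Flatten the sample's gradient across all parameters
    # (zeros for parameters that received no gradient).
    grad = torch.cat([
        (p.grad if p.grad is not None else torch.zeros_like(p)).flatten()
        for p in model.parameters()
    ])
    # Target direction: from the current weights toward the reference weights.
    target = torch.cat([
        (p_ref.detach() - p.detach()).flatten()
        for p, p_ref in zip(model.parameters(), ref_model.parameters())
    ])
    return F.cosine_similarity(-grad, target, dim=0).item()
```

Samples could then be ranked by this score, with a filter keeping only those above a threshold or within a top fraction; this is one plausible way a Grad-Mimic-style pipeline might prioritize samples and build its filters.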

Related readings and updates.

Introducing Apple’s On-Device and Server Foundation Models

At the 2024 Worldwide Developers Conference, we introduced Apple Intelligence, a personal intelligence system integrated deeply into iOS 18, iPadOS 18, and macOS Sequoia.

Apple Intelligence comprises multiple highly capable generative models that are specialized for our users’ everyday tasks and can adapt on the fly to their current activity. The foundation models built into Apple Intelligence have been fine-tuned for user experiences such as writing and refining text, prioritizing and summarizing notifications, creating playful images for conversations with family and friends, and taking in-app actions to simplify interactions across apps.


Data Filtering Networks

Large training sets have become a cornerstone of machine learning and are the foundation for recent advances in language modeling and multimodal learning. While data curation for pre-training is often still ad hoc, one common paradigm is to first collect a massive pool of data from the Web and then filter this candidate pool down to an actual training set via various heuristics. In this work, we study the problem of learning a data filtering…