Datasets, Documents, and Repetitions: The Practicalities of Unequal Data Quality
Authors: Alex Fang†, Hadi Pouransari, Matt Jordan‡, Alexander Toshev, Vaishaal Shankar§, Ludwig Schmidt†, Tom Gunter¶
Data filtering has become a powerful tool for improving model performance while reducing computational cost. However, as large language model compute budgets continue to grow, the limited data volume provided by heavily filtered and deduplicated datasets will become a practical constraint. To better understand how to proceed, we study model performance at various compute budgets and across multiple pre-training datasets created through data filtering and deduplication. We find that, given appropriate modifications to the training recipe, repeating an existing aggressively filtered dataset for up to ten epochs can outperform training for a single epoch on its ten-times-larger superset, across compute budgets spanning multiple orders of magnitude. While this finding relies on repeating the dataset for many epochs, we also investigate repeats within these datasets at the document level. We find that not all documents within a dataset are equal: we can create better datasets relative to a token budget by explicitly manipulating the repeat counts of individual documents. We conclude by arguing that even as large language models scale, data filtering remains an important direction of research.
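The sketch below illustrates the document-level idea in minimal form: fill a fixed token budget by giving each document an explicit repeat count derived from a quality score. This is not the paper's implementation; `num_tokens`, the quality scores, and the linear score-to-count mapping are all hypothetical stand-ins for whatever filter is used in practice.

```python
# Minimal sketch (hypothetical, not the paper's code): build a training set
# under a fixed token budget by repeating individual documents according to
# a per-document quality score.
from typing import Callable

def repeat_counts(scores: list[float], max_repeats: int = 10) -> list[int]:
    """Map quality scores linearly to integer repeat counts in [0, max_repeats]."""
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0  # avoid division by zero when all scores are equal
    return [round(max_repeats * (s - lo) / span) for s in scores]

def build_dataset(
    documents: list[str],
    scores: list[float],
    num_tokens: Callable[[str], int],
    token_budget: int,
) -> list[str]:
    """Fill the token budget with documents, highest repeat counts first."""
    counts = repeat_counts(scores)
    order = sorted(range(len(documents)), key=lambda i: counts[i], reverse=True)
    dataset, used = [], 0
    for i in order:
        for _ in range(counts[i]):
            cost = num_tokens(documents[i])
            if used + cost > token_budget:
                return dataset
            dataset.append(documents[i])
            used += cost
    return dataset
```

A uniform ten-epoch repeat of a filtered dataset is the special case where every kept document receives the same count; the per-document mapping above is the generalization the abstract points to.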
Evaluating Sample Utility for Data Selection by Mimicking Model Weights
September 23, 2025 · research areas: Computer Vision, Data Science and Annotation · Workshop at ICML
This paper was accepted at the DataWorld (Data Curation) Workshop at ICML 2025.
Multimodal models are trained on large-scale web-crawled datasets, which often contain noise, bias, and irrelevant information. This motivates the use of data selection techniques, which can be divided into model-free variants, relying on heuristic rules and downstream datasets, and model-based approaches, such as those using influence functions. The former can be…
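To make the two families concrete, here is a minimal sketch of each. Both scorers are hypothetical illustrations, not the paper's method: the model-free rule inspects only the example itself, while the model-based scorer queries a trained model through an assumed `estimate_utility` method.

```python
# Hypothetical sketch contrasting model-free and model-based data selection.

def heuristic_score(example: dict) -> float:
    """Model-free: a hand-written rule, e.g. prefer longer English captions."""
    if example.get("lang") != "en":
        return 0.0
    return float(len(example["caption"].split()))

def model_based_score(example: dict, model) -> float:
    """Model-based: score via a trained model's signal, e.g. an influence or
    utility estimate (assumed API, for illustration only)."""
    return model.estimate_utility(example)

def select_top_k(pool: list[dict], score, k: int) -> list[dict]:
    """Keep the k highest-scoring examples under either scorer."""
    return sorted(pool, key=score, reverse=True)[:k]
```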
Data Filtering Networks
April 8, 2024 · research areas: Computer Vision, Methods and Algorithms · conference: ICLR
Large training sets have become a cornerstone of machine learning and are the foundation for recent advances in language modeling and multimodal learning. While data curation for pre-training is often still ad-hoc, one common paradigm is to first collect a massive pool of data from the Web and then filter this candidate pool down to an actual training set via various heuristics. In this work, we study the problem of learning a data filtering…
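The pool-then-filter paradigm the abstract describes can be sketched in a few lines. This assumes `score_model` is a learned filtering network that maps one candidate example to a scalar quality score; the quantile threshold (keep the top 20% of the pool) is an illustrative choice, not a value from the paper.

```python
# Minimal sketch of score-based pool filtering (assumptions noted above).
import numpy as np

def filter_pool(pool: list[dict], score_model, keep_fraction: float = 0.2) -> list[dict]:
    """Score every candidate in the web-crawled pool, then keep the top slice."""
    scores = np.array([score_model(x) for x in pool])
    threshold = np.quantile(scores, 1.0 - keep_fraction)
    return [x for x, s in zip(pool, scores) if s >= threshold]
```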