Datasets, Documents, and Repetitions: The Practicalities of Unequal Data Quality
Authors: Alex Fang†, Hadi Pouransari, Matt Jordan‡, Alexander Toshev, Vaishaal Shankar§, Ludwig Schmidt†, Tom Gunter¶
Data filtering has become a powerful tool for improving model performance while reducing computational cost. However, as large language model compute budgets continue to grow, the limited data volume provided by heavily filtered and deduplicated datasets will become a practical constraint. To better understand how to proceed, we study model performance at various compute budgets and across multiple pre-training datasets created through data filtering and deduplication. We find that, given appropriate modifications to the training recipe, repeating existing aggressively filtered datasets for up to ten epochs can outperform training on the ten-times-larger superset for a single epoch, across multiple orders of magnitude of compute budget. While this finding relies on repeating the dataset for many epochs, we also investigate repeats within these datasets at the document level. We find that not all documents within a dataset are equal, and we can create better datasets relative to a token budget by explicitly manipulating the counts of individual documents. We conclude by arguing that even as large language models scale, data filtering remains an important direction of research.
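The idea of manipulating per-document counts against a token budget can be illustrated with a minimal sketch. The `quality` scores, document dictionaries, and repeat cap below are hypothetical placeholders, not the paper's actual scoring or mixing method:

```python
def build_mixture(documents, token_budget, max_repeats=10):
    """Greedily fill a token budget, repeating higher-quality documents
    more often. Each document is a dict with hypothetical fields:
    'id', 'tokens' (length), and 'quality' (a score in [0, 1])."""
    # Consider the highest-quality documents first.
    ranked = sorted(documents, key=lambda d: d["quality"], reverse=True)
    mixture, used = [], 0
    for doc in ranked:
        if used >= token_budget:
            break
        # Scale the repeat count with quality, capped at max_repeats.
        repeats = max(1, round(max_repeats * doc["quality"]))
        for _ in range(repeats):
            if used + doc["tokens"] > token_budget:
                break
            mixture.append(doc["id"])
            used += doc["tokens"]
    return mixture, used

docs = [
    {"id": "a", "tokens": 100, "quality": 1.0},
    {"id": "b", "tokens": 100, "quality": 0.2},
]
mixture, used = build_mixture(docs, token_budget=500, max_repeats=3)
# Document "a" is repeated three times before "b" appears once.
```

In practice, the paper's point is that the repeat count itself is a dataset-design knob: under a fixed token budget, upweighting strong documents can beat adding weaker ones.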
On the Impossibility of Separating Intelligence from Judgment: The Computational Intractability of Filtering for AI Alignment
March 3, 2026 | Research area: Methods and Algorithms | Conference: ICLR
With the increased deployment of large language models (LLMs), one concern is their potential misuse for generating harmful content. Our work studies the alignment challenge, with a focus on filters to prevent the generation of unsafe information. Two natural points of intervention are the filtering of the input prompt before it reaches the model, and filtering the output after generation. Our main results demonstrate computational challenges in…
Data Filtering Networks
April 8, 2024 | Research areas: Computer Vision, Methods and Algorithms | Conference: ICLR
Large training sets have become a cornerstone of machine learning and are the foundation for recent advances in language modeling and multimodal learning. While data curation for pre-training is often still ad-hoc, one common paradigm is to first collect a massive pool of data from the Web and then filter this candidate pool down to an actual training set via various heuristics. In this work, we study the problem of learning a data filtering…