Scaling Laws for Unsupervised Finetuning of LLMs
Authors: Louis Béthune, David Grangier, Dan Busbridge, Eleonora Gualdoni, Marco Cuturi, Pierre Ablin
A widespread strategy for obtaining a language model that performs well in a target domain is to fine-tune it with unsupervised next-token prediction on data from that domain. Fine-tuning presents two challenges: i) if the amount of target data is limited, as is the case in most practical applications, the model quickly overfits, and ii) the model drifts away from the original model and forgets the pre-training distribution. This paper quantifies these two phenomena across several target domains, amounts of available target data, and model scales. We also measure the efficiency of mixing pre-training and target data during fine-tuning to avoid forgetting and mitigate overfitting. A key practical takeaway from our study is that including as little as 1% of pre-training data in the fine-tuning data mixture shields the model from forgetting the pre-training set.
Figure 1: As little as 1% of pre-training data is sufficient to shield against forgetting.
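To make the mixing strategy concrete, below is a minimal sketch of how a small fraction of pre-training data might be interleaved into a fine-tuning data stream. The function name, arguments, and the 1% default are illustrative assumptions drawn from the abstract's takeaway, not the paper's actual implementation.

```python
import random

def mixed_stream(target_examples, pretrain_examples, pretrain_fraction=0.01, seed=0):
    """Yield fine-tuning examples, replacing a small fraction with
    pre-training data to mitigate forgetting.

    Hypothetical helper: the paper's exact mixing procedure may differ.
    """
    rng = random.Random(seed)
    for target_example in target_examples:
        if rng.random() < pretrain_fraction:
            # With probability `pretrain_fraction`, draw a pre-training
            # example instead of a target-domain one.
            yield rng.choice(pretrain_examples)
        else:
            yield target_example

# Example usage: mix roughly 1% pre-training text into a target-domain stream.
target = ["target doc A", "target doc B", "target doc C"]
pretrain = ["pretrain doc X", "pretrain doc Y"]
mixture = list(mixed_stream(target, pretrain, pretrain_fraction=0.01))
```

Sampling stochastically per example, as above, keeps the expected pre-training share at the chosen fraction without requiring the two datasets to be pre-shuffled together.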