Trade-offs in Data Memorization via Strong Data Processing Inequalities
Authors: Vitaly Feldman, Guy Kornowski†, Xin Lyu‡
Recent research demonstrated that training large language models involves memorization of a significant fraction of training data. Such memorization can lead to privacy violations when training on sensitive user data, and thus motivates the study of data memorization's role in learning. In this work, we develop a general approach for proving lower bounds on excess data memorization that relies on a new connection between strong data processing inequalities and data memorization. We then demonstrate that several simple and natural binary classification problems exhibit a trade-off between the number of samples available to a learning algorithm and the amount of information about the training data that the algorithm needs to memorize to be accurate. In particular, Θ(d) bits of information about the training data need to be memorized when O(1) d-dimensional examples are available, and this amount then decays as the number of examples grows, at a problem-specific rate. Further, our lower bounds are generally matched (up to logarithmic factors) by simple learning algorithms. We also extend our lower bounds to more general mixture-of-clusters models. Our definitions and results build on the work of Brown et al. (2021) and address several limitations of the lower bounds in their work.
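For readers unfamiliar with the central tool, the following is the standard textbook form of a strong data processing inequality (background only; the paper's precise formulation and constants are not reproduced here). For a Markov chain U → X → Y through a fixed channel P<sub>Y|X</sub>, the SDPI constant strictly tightens the ordinary data processing inequality:

```latex
% Standard SDPI background (generic definition, not the paper's statement):
% for any Markov chain U -> X -> Y through the channel P_{Y|X},
\[
  I(U;Y) \;\le\; \eta\bigl(P_{Y\mid X}\bigr)\, I(U;X),
  \qquad
  \eta\bigl(P_{Y\mid X}\bigr) \;:=\; \sup_{U \,:\, U \to X \to Y} \frac{I(U;Y)}{I(U;X)} \;<\; 1,
\]
% where the supremum is over joint distributions with I(U;X) > 0 and
% eta < 1 holds for suitably noisy channels (e.g., binary symmetric).
```

Intuitively, when the channel from the training data to the learner's observations contracts information by a factor η &lt; 1, any accurate output must retain, i.e. memorize, a correspondingly large amount of information about the data itself.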
December 3, 2020 · Research area: Methods and Algorithms · Conference: NeurIPS
Apple sponsored the Neural Information Processing Systems (NeurIPS) conference, which was held virtually from December 6 to 12. NeurIPS is a global conference focused on fostering the exchange of research on neural information processing systems in their biological, technological, mathematical, and theoretical aspects.