
We consider a sequential setting in which a single dataset of individuals is used to perform adaptively chosen analyses, while ensuring that the differential privacy loss of each participant does not exceed a pre-specified privacy budget. The standard approach to this problem relies on bounding a worst-case estimate of the privacy loss over all individuals and all possible values of their data, for every single analysis. Yet in many scenarios this approach is overly conservative, especially for "typical" data points, which incur little privacy loss from participating in most of the analyses. In this work, we give a method for tighter privacy loss accounting based on a personalized privacy loss estimate for each individual in each analysis. The accounting method relies on a new composition theorem for Rényi differential privacy that allows adaptively chosen privacy parameters. We apply our results to the analysis of noisy gradient descent and show how existing algorithms can be generalized to incorporate individual privacy accounting and thus achieve a better privacy-utility tradeoff.
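To make the idea concrete, here is a minimal sketch (not the paper's actual algorithm) of noisy full-batch gradient descent with a per-individual Rényi filter. It relies on the standard fact that the Gaussian mechanism with L2 sensitivity c and noise standard deviation s satisfies (α, αc²/(2s²))-Rényi DP, and it uses each example's clipped gradient norm as its individual sensitivity, retiring an example once its accumulated Rényi budget would be exceeded. The function names (`grad_fn`, `noisy_gd_with_individual_accounting`) are hypothetical, introduced only for this illustration.

```python
import numpy as np

def rdp_gaussian(sensitivity, noise_std, alpha):
    # Gaussian mechanism with L2 sensitivity c and noise std s
    # satisfies (alpha, alpha * c^2 / (2 s^2))-RDP.
    return alpha * sensitivity**2 / (2.0 * noise_std**2)

def noisy_gd_with_individual_accounting(
    X, y, grad_fn, steps, lr, clip, sigma, alpha, rdp_budget, rng=None
):
    # Full-batch noisy GD with a per-individual Renyi filter: each step,
    # an example's clipped gradient norm sets its individual RDP cost;
    # examples whose accumulated cost would exceed `rdp_budget` stop
    # participating. Noise is calibrated to the worst-case clip norm.
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    theta = np.zeros(d)
    spent = np.zeros(n)               # accumulated RDP per individual
    active = np.ones(n, dtype=bool)
    noise_std = sigma * clip          # worst-case Gaussian noise scale
    for _ in range(steps):
        total = np.zeros(d)
        for i in np.flatnonzero(active):
            g = grad_fn(theta, X[i], y[i])
            norm = np.linalg.norm(g)
            c_i = min(norm, clip)                       # individual sensitivity
            cost = rdp_gaussian(c_i, noise_std, alpha)  # per-step RDP for i
            if spent[i] + cost > rdp_budget:
                active[i] = False                       # Renyi filter: halt i
                continue
            spent[i] += cost
            total += g * (c_i / max(norm, 1e-12))       # clip g to norm c_i
        if not active.any():
            break
        noise = noise_std * rng.standard_normal(d)
        theta -= lr * (total + noise) / active.sum()
    return theta, spent
```

Under this scheme, "typical" examples with small gradients pay less budget per step and can therefore participate in more iterations than a worst-case analysis would permit, which is the source of the improved privacy-utility tradeoff.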

Related readings and updates.

Protection Against Reconstruction and Its Applications in Private Federated Learning

In large-scale statistical learning, data collection and model fitting are moving increasingly toward peripheral devices—phones, watches, fitness trackers—away from centralized data collection. Concomitant with this rise in decentralized data are increasing challenges of maintaining privacy while allowing enough information to fit accurate, useful statistical models. This motivates local notions of privacy—most significantly, local differential…
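The local model referenced above can be illustrated with the classic randomized-response mechanism for a single bit, a minimal sketch of ε-local differential privacy (illustrative only, not code from the paper):

```python
import numpy as np

def randomized_response(bit, epsilon, rng=None):
    # Report the true bit with probability e^eps / (e^eps + 1),
    # otherwise flip it; each report is epsilon-locally-DP, since
    # the likelihood ratio of any report is at most p / (1 - p) = e^eps.
    rng = np.random.default_rng() if rng is None else rng
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return int(bit) if rng.random() < p_truth else 1 - int(bit)

def debiased_mean(reports, epsilon):
    # Unbiased estimate of the true bit frequency from noisy reports:
    # E[report] = (1 - p) + bit * (2p - 1), so invert that affine map.
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return (np.mean(reports) - (1.0 - p)) / (2.0 * p - 1.0)
```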

Lower Bounds for Locally Private Estimation via Communication Complexity

We develop lower bounds for estimation under local privacy constraints—including differential privacy and its relaxations to approximate or Rényi differential privacy—by showing an equivalence between private estimation and communication-restricted estimation problems. Our results apply to arbitrarily interactive privacy mechanisms, and they also give sharp lower bounds for all levels of differential privacy protections, that is, privacy…
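For reference, the local differential privacy constraint this abstract invokes is the standard one (the textbook definition, not quoted from the paper): a randomized channel Q mapping a datum X to a report Z is ε-locally differentially private if

```latex
% \varepsilon-local differential privacy for a channel Q:
Q(Z \in S \mid X = x) \;\le\; e^{\varepsilon}\, Q(Z \in S \mid X = x')
\quad \text{for all inputs } x, x' \text{ and all measurable sets } S.
```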