
Recent work of Erlingsson, Feldman, Mironov, Raghunathan, Talwar, and Thakurta demonstrates that random shuffling amplifies the differential privacy guarantees of locally randomized data. Such amplification implies substantially stronger privacy guarantees for systems in which data is contributed anonymously, and it has led to significant interest in the shuffle model of privacy.

We show that random shuffling of $n$ data records that are input to $\varepsilon_0$-differentially private local randomizers results in an $\left(O\!\left((1 - e^{-\varepsilon_0})\sqrt{\frac{e^{\varepsilon_0} \log(1/\delta)}{n}}\right), \delta\right)$-differentially private algorithm. This significantly improves over previous work and achieves the asymptotically optimal dependence on $\varepsilon_0$. Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees. Our work also yields an empirical method to derive tighter bounds on the resulting $\varepsilon$, and we show that it gets to within a small constant factor of the optimal bound. As a direct corollary of our analysis, we derive a simple and asymptotically optimal algorithm for discrete distribution estimation in the shuffle model of privacy. We also observe that our result implies the first asymptotically optimal privacy analysis of noisy stochastic gradient descent that applies to sampling without replacement.
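To make the shape of this bound concrete, the following is a minimal sketch that evaluates the amplified $\varepsilon$ as a function of $\varepsilon_0$, $n$, and $\delta$. The constant `C` is a hypothetical placeholder for the unspecified constant hidden in the $O(\cdot)$, so this illustrates the scaling behavior of the bound rather than a certified privacy guarantee.

```python
import math

def shuffled_epsilon(eps0: float, n: int, delta: float, C: float = 1.0) -> float:
    """Asymptotic shape of the amplified epsilon after shuffling the
    outputs of n eps0-DP local randomizers:

        eps = O((1 - e^{-eps0}) * sqrt(e^{eps0} * log(1/delta) / n))

    C is a hypothetical stand-in for the constant in O(.).
    """
    return C * (1 - math.exp(-eps0)) * math.sqrt(
        math.exp(eps0) * math.log(1 / delta) / n
    )

# Example: amplification strengthens as the number of shuffled reports grows.
for n in (10**3, 10**4, 10**5):
    print(n, shuffled_epsilon(eps0=1.0, n=n, delta=1e-6))
```

Note the two regimes the formula captures: for small $\varepsilon_0$ the prefactor $(1 - e^{-\varepsilon_0}) \approx \varepsilon_0$, while for larger $\varepsilon_0$ the $\sqrt{e^{\varepsilon_0}}$ factor dominates, which is the asymptotically optimal dependence on $\varepsilon_0$ claimed above.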

Related readings and updates.

Apple Privacy-Preserving Machine Learning Workshop 2022

Earlier this year, Apple hosted the Privacy-Preserving Machine Learning (PPML) workshop. This virtual event brought Apple and members of the academic research communities together to discuss the state of the art in the field of privacy-preserving machine learning through a series of talks and discussions over two days.

See event details

A Survey on Privacy from Statistical, Information and Estimation-Theoretic Views

Privacy risk has become an emerging challenge in both information theory and computer science due to the massive (centralized) collection of user data. In this paper, we overview privacy-preserving mechanisms and metrics through the lens of information theory, and unify different privacy metrics, including f-divergences, Rényi divergences, and differential privacy, via the probability likelihood ratio (and its logarithm). We introduce…
See paper details
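The unifying observation in the survey above, that pure differential privacy, f-divergences, and Rényi divergences are all functionals of the same likelihood ratio, can be made concrete in a few lines. This is an illustrative sketch with made-up distributions `p` and `q` (standing in for a mechanism's output distributions on neighboring datasets), not the survey's formalism:

```python
import numpy as np

# Two discrete distributions on the same support.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

# Log-likelihood ratio: the common object behind all the metrics below.
log_ratio = np.log(p / q)

# Pure differential privacy: eps is the worst-case |log ratio|.
eps = np.max(np.abs(log_ratio))

# KL divergence (an f-divergence): the mean of the log ratio under p.
kl = np.sum(p * log_ratio)

# Rényi divergence of order alpha: again a functional of p/q.
alpha = 2.0
renyi = np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1)

print(f"eps = {eps:.4f}, KL = {kl:.4f}, Renyi(alpha={alpha}) = {renyi:.4f}")
```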