This paper was accepted at the ICML 2021 conference, as well as at the Theory and Practice of Differential Privacy workshop at ICML 2021.

Locally Differentially Private (LDP) reports are commonly used for the collection of statistics and for machine learning in the federated setting. In many cases, the best-known LDP algorithms require sending prohibitively large messages from the client device to the server (such as when constructing histograms over a large domain or learning a high-dimensional model). This has led to significant efforts to reduce the communication cost of LDP algorithms.

At the same time, LDP reports are known to contain relatively little information about the user's data due to randomization. Several schemes are known that exploit this fact to design low-communication versions of LDP algorithms, but all of them do so at the expense of a significant loss in utility. Here we demonstrate a general approach that, under standard cryptographic assumptions, compresses every efficient LDP algorithm with negligible loss in privacy and utility guarantees. The practical implication of our result is that in typical applications the message can be compressed to the size of the server's pseudo-random generator seed. More generally, we relate the properties of an LDP randomizer to the power of a pseudo-random generator that suffices for compressing it. From this general approach we derive low-communication algorithms for the problems of frequency estimation and high-dimensional mean estimation. Our algorithms are simpler and more accurate than existing low-communication LDP algorithms for these well-studied problems.
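To make the seed-compression idea concrete, below is a minimal sketch, not the paper's construction: the toy k-ary randomized response randomizer, the SHA-256 stand-in for the server's pseudo-random generator, and the function names (reference_sample, client_encode, server_decode) are all illustrative assumptions. The client rejection-samples a short seed whose PRG-expanded draw from a fixed, input-independent reference distribution is distributed exactly as its LDP report; the server re-expands the seed, so only the seed travels over the wire.

```python
import hashlib
import math
import random

# Toy pure eps-LDP randomizer: k-ary randomized response.
# A(x) outputs x with probability e^eps / (e^eps + k - 1),
# and a uniformly random other value otherwise.
EPS, K = 1.0, 16
P_KEEP = math.exp(EPS) / (math.exp(EPS) + K - 1)

def rr_prob(x, y):
    """P[A(x) = y] for k-ary randomized response."""
    return P_KEEP if y == x else (1.0 - P_KEEP) / (K - 1)

def reference_sample(seed):
    """Draw y ~ A(x0) for a fixed reference input x0 = 0, using only
    pseudorandomness expanded from a short seed (SHA-256 as a stand-in
    for a cryptographic PRG). This is input-independent, so the server
    can recompute it from the seed alone."""
    digest = hashlib.sha256(seed.to_bytes(8, "big")).digest()
    rng = random.Random(digest)
    return 0 if rng.random() < P_KEEP else rng.randrange(1, K)

def client_encode(x):
    """Rejection-sample a seed whose reference output is distributed as A(x).
    Pure eps-LDP bounds the density ratio P[A(x)=y] / P[A(0)=y] by e^eps,
    so accepting with probability ratio / e^eps yields a sample from A(x)."""
    while True:
        seed = random.getrandbits(32)          # candidate short seed
        y = reference_sample(seed)
        ratio = rr_prob(x, y) / rr_prob(0, y)  # <= e^eps by eps-LDP
        if random.random() < ratio / math.exp(EPS):
            return seed                        # the entire report is this seed

def server_decode(seed):
    """Recover the LDP report by re-expanding the seed; x is never needed."""
    return reference_sample(seed)

x = 7
seed = client_encode(x)
print(f"client sends seed {seed}; server reconstructs report y = {server_decode(seed)}")
```

Because pure ε-LDP bounds the density ratio by e^ε, the client needs about e^ε candidate seeds in expectation, so for moderate ε the encoding loop terminates after only a few trials.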

Related readings and updates.

Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments

*Equal Contributors

To deploy machine learning models on-device, practitioners use compression algorithms to shrink and speed up models while maintaining their high-quality output. A critical aspect of compression in practice is model comparison, including tracking many compression experiments, identifying subtle changes in model behavior, and negotiating complex accuracy-efficiency trade-offs. However, existing compression tools poorly support…

Private Frequency Estimation via Projective Geometry

In this work, we propose a new algorithm ProjectiveGeometryResponse (PGR) for locally differentially private (LDP) frequency estimation. For a universe size of $k$ and with $n$ users, our $\varepsilon$-LDP algorithm has communication cost $\lceil \log_2 k \rceil$ bits in the private coin setting and $\varepsilon \log_2 e + O(1)$ in the public coin setting, and has computation cost $O(n + k\exp(\varepsilon)\log k)$…
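For a rough sense of scale, here is a small worked example with illustrative parameter values that are assumptions, not figures from the paper: with a universe of size $k = 2^{20}$ and $\varepsilon = 1$, the private coin cost is 20 bits per report, while the public coin cost is about 1.44 bits plus a constant.

```python
import math

k, eps = 2 ** 20, 1.0                   # illustrative parameters (assumptions)
private_coin = math.ceil(math.log2(k))  # ceil(log2 k) bits in the private coin setting
public_coin = eps * math.log2(math.e)   # eps * log2(e) bits, up to the O(1) term
print(private_coin)                     # 20
print(f"{public_coin:.2f} + O(1)")      # 1.44 + O(1)
```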