PINE: Efficient Norm-Bound Verification for Secret-Shared Vectors
Authors: Guy N. Rothblum, Eran Omri, Junye Chen, Kunal Talwar
Secure aggregation of high-dimensional vectors is a fundamental primitive in federated statistics and learning. A two-server system such as Prio allows for scalable aggregation of secret-shared vectors. Adversarial clients might try to manipulate the aggregate, so it is important to ensure that each (secret-shared) contribution is well-formed. In this work, we focus on the important and well-studied goal of ensuring that each contribution vector has bounded Euclidean norm. Existing protocols for ensuring bounded-norm contributions either incur a large communication overhead or only allow for approximate verification of the norm bound. We propose Private Inexpensive Norm Enforcement (PINE): a new protocol that allows exact norm verification with little communication overhead. For high-dimensional vectors, our approach has a communication overhead of a few percent, compared to the 16-32x overhead of previous approaches.
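To make the setting concrete, the sketch below shows the two ingredients the abstract refers to: additive secret sharing of a client vector between two servers (as in Prio-style systems), and the Euclidean norm-bound condition that PINE certifies. This is an illustrative toy, not the PINE protocol itself; the field modulus and all function names are hypothetical, and a real deployment verifies the bound without ever reconstructing the vector.

```python
import secrets

# Hypothetical field modulus; a Mersenne prime is a common choice
# for Prio-style additive secret sharing. Illustrative only.
FIELD = 2**61 - 1

def share_vector(v):
    """Split an integer vector into two additive shares mod FIELD."""
    share_a = [secrets.randbelow(FIELD) for _ in v]
    share_b = [(x - a) % FIELD for x, a in zip(v, share_a)]
    return share_a, share_b

def reconstruct(share_a, share_b):
    """Recombine the two shares (done here only for illustration)."""
    return [(a + b) % FIELD for a, b in zip(share_a, share_b)]

def norm_squared(v):
    """Squared Euclidean norm -- the quantity PINE bounds exactly."""
    return sum(x * x for x in v)

# A client contribution with ||v||^2 = 9 + 16 = 25.
v = [3, 4, 0, 0]
a, b = share_vector(v)
assert reconstruct(a, b) == v       # shares sum back to the vector
assert norm_squared(v) <= 25        # the well-formedness condition
```

Each server sees only one share, which is individually uniform and reveals nothing about `v`; the protocol's job is to check the norm condition on the shared vector with low communication.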
PREAMBLE: Private and Efficient Aggregation via Block Sparse Vectors
September 29, 2025 · Research areas: Methods and Algorithms, Privacy · Conference: NeurIPS
We revisit the problem of secure aggregation of high-dimensional vectors in a two-server system such as Prio. These systems are typically used to aggregate vectors such as gradients in private federated learning, where the aggregate itself is protected via noise addition to ensure differential privacy. Existing approaches require communication scaling with the dimensionality, and thus limit the dimensionality of vectors one can efficiently…
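The abstract mentions that the aggregate itself is protected by noise addition to ensure differential privacy. A minimal sketch of that server-side step, with illustrative parameters (the noise scale and the function name are assumptions, not values from the paper):

```python
import random

def aggregate_with_noise(vectors, sigma, rng=None):
    """Sum client vectors coordinate-wise, then add Gaussian noise.

    Hypothetical helper: sigma would be calibrated to the clipping
    (norm) bound and the target differential-privacy guarantee.
    """
    rng = rng or random.Random(0)
    dim = len(vectors[0])
    total = [sum(v[i] for v in vectors) for i in range(dim)]
    return [t + rng.gauss(0.0, sigma) for t in total]

# Two clients' contributions; the exact sum is [4.0, 6.0].
noisy = aggregate_with_noise([[1.0, 2.0], [3.0, 4.0]], sigma=0.1)
```

Because the noise scale is tied to the maximum norm of any single contribution, enforcing a norm bound on each secret-shared vector (as in PINE) is what keeps the privacy analysis valid against malicious clients.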
Samplable Anonymous Aggregation for Private Federated Data Analytics
July 18, 2024 · Research areas: Privacy; Tools, Platforms, Frameworks
We revisit the problem of designing scalable protocols for private statistics and private federated learning when each device holds its private data. Locally differentially private algorithms require little trust but are (provably) limited in their utility. Centrally differentially private algorithms can allow significantly better utility but require a trusted curator. This gap has led to significant interest in the design and implementation of…