
Amidst rising appreciation for privacy and data usage rights, researchers have increasingly acknowledged the principle of data minimization, which holds that the accessibility, collection, and retention of subjects' data should be kept to the bare minimum needed to answer focused research questions. Applying this principle to randomized controlled trials (RCTs), this paper presents algorithms for making accurate inferences from RCTs under stringent data retention and anonymization policies. In particular, we show how to use recursive algorithms to construct running estimates of treatment effects in RCTs, which allow individualized records to be deleted or anonymized shortly after collection. Devoting special attention to non-i.i.d. data, we further show how to draw robust inferences from RCTs by combining recursive algorithms with bootstrap and federated strategies.
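To illustrate the idea of a recursive running estimate, the sketch below shows one simple (hypothetical, not the paper's actual algorithm) way to maintain a difference-in-means treatment effect estimate online: each subject's outcome is folded into per-arm running statistics via Welford's algorithm, after which the individualized record can be deleted.

```python
class RunningATE:
    """Recursive difference-in-means estimator for a two-arm RCT.

    Illustrative sketch only: maintains per-arm counts, means, and sums of
    squared deviations (Welford's online algorithm), so individual records
    need not be retained after they are incorporated.
    """

    def __init__(self):
        # Index 0 = control arm, index 1 = treatment arm.
        self.n = [0, 0]       # running counts
        self.mean = [0.0, 0.0]  # running means
        self.m2 = [0.0, 0.0]    # running sums of squared deviations

    def update(self, arm, outcome):
        """Fold in one subject's outcome; the record can be discarded afterwards."""
        self.n[arm] += 1
        delta = outcome - self.mean[arm]
        self.mean[arm] += delta / self.n[arm]
        self.m2[arm] += delta * (outcome - self.mean[arm])

    def estimate(self):
        """Current difference-in-means estimate of the treatment effect."""
        return self.mean[1] - self.mean[0]

    def std_error(self):
        """Approximate unpooled two-sample standard error of the estimate."""
        var = [self.m2[i] / (self.n[i] - 1) for i in (0, 1)]
        return (var[0] / self.n[0] + var[1] / self.n[1]) ** 0.5
```

Because the estimator is updated one record at a time, it composes naturally with the bootstrap and federated strategies the abstract mentions: each site (or bootstrap replicate) can maintain its own running statistics and share only the aggregates.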

Related readings and updates.

Apple Privacy-Preserving Machine Learning Workshop 2022

Earlier this year, Apple hosted the Workshop on Privacy-Preserving Machine Learning (PPML). This virtual event brought together Apple researchers and members of the academic research community to discuss the state of the art in privacy-preserving machine learning through a series of talks and discussions over two days.


Understanding and Visualizing Data Iteration in Machine Learning

Successful machine learning (ML) applications require iterations on both modeling and the underlying data. While prior visualization tools for ML primarily focus on modeling, our interviews with 23 ML practitioners reveal that they improve model performance frequently by iterating on their data (e.g., collecting new data, adding labels) rather than their models. We also identify common types of data iterations and associated analysis tasks and…