Randomized Controlled Trials without Data Retention
Authors: Winston Chou
Amidst rising appreciation for privacy and data usage rights, researchers have increasingly acknowledged the principle of data minimization, which holds that the accessibility, collection, and retention of subjects' data should be kept to the minimum needed to answer focused research questions. Applying this principle to randomized controlled trials (RCTs), this paper presents algorithms for making accurate inferences from RCTs under stringent data retention and anonymization policies. In particular, we show how to use recursive algorithms to construct running estimates of treatment effects in RCTs, which allow individualized records to be deleted or anonymized shortly after collection. Devoting special attention to non-i.i.d. data, we further show how to draw robust inferences from RCTs by combining recursive algorithms with bootstrap and federated strategies.
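To illustrate the kind of recursive estimator the abstract describes, the sketch below maintains running per-arm moments with Welford's online algorithm, so a difference-in-means treatment effect and its standard error can be computed without retaining individual records. This is a minimal illustration, not the paper's actual algorithm; the class and function names (`RunningMoments`, `streaming_ate`) are hypothetical.

```python
class RunningMoments:
    """Welford's online algorithm: updates count, mean, and sum of squared
    deviations (M2) one observation at a time, so each raw record can be
    deleted immediately after it is consumed."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def update(self, y):
        # Recursive update: new state depends only on old state and y.
        self.n += 1
        delta = y - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (y - self.mean)

    @property
    def variance(self):
        # Sample variance; undefined for fewer than two observations.
        return self.m2 / (self.n - 1) if self.n > 1 else float("nan")


def streaming_ate(records):
    """Consume (treated, outcome) pairs as a stream and return the
    difference-in-means estimate of the average treatment effect and
    its standard error. Only six summary numbers are ever retained."""
    arms = {True: RunningMoments(), False: RunningMoments()}
    for treated, outcome in records:
        arms[treated].update(outcome)
        # The individual record could be deleted here.
    t, c = arms[True], arms[False]
    ate = t.mean - c.mean
    se = (t.variance / t.n + c.variance / c.n) ** 0.5
    return ate, se
```

Because the update at each step depends only on the current summary state and the incoming observation, the same recursion can run independently on separate shards and the per-arm moments can be merged afterwards, which is the shape a federated variant would take.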