
Amidst rising appreciation for privacy and data usage rights, researchers have increasingly acknowledged the principle of data minimization, which holds that the accessibility, collection, and retention of subjects' data should be kept to the bare amount needed to answer focused research questions. Applying this principle to randomized controlled trials (RCTs), this paper presents algorithms for making accurate inferences from RCTs under stringent data retention and anonymization policies. In particular, we show how to use recursive algorithms to construct running estimates of treatment effects in RCTs, which allow individualized records to be deleted or anonymized shortly after collection. Devoting special attention to non-i.i.d. data, we further show how to draw robust inferences from RCTs by combining recursive algorithms with bootstrap and federated strategies.
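To make the idea concrete, the sketch below shows the general flavor of a recursive (streaming) treatment-effect estimator of the kind the abstract describes, not the paper's exact algorithm. Each record updates running sufficient statistics (Welford-style means and variances per arm) plus a set of one-pass Poisson-bootstrap replicates for uncertainty, after which the individual record can be deleted or anonymized. The class and function names (RunningGroupStats, RunningATE, poisson1) and all parameter choices are illustrative assumptions, not taken from the paper.

```python
import math
import random


class RunningGroupStats:
    """Welford-style running mean/variance for one trial arm.
    Only counts and moments are retained; raw records can be discarded
    immediately after each update."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def update(self, y, weight=1):
        # 'weight' lets the same recursion serve Poisson-bootstrap replicates.
        for _ in range(weight):
            self.n += 1
            delta = y - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (y - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else float("nan")


def poisson1(rng):
    """Draw from Poisson(1) by inversion (small-lambda case)."""
    u, k, p = rng.random(), 0, math.exp(-1.0)
    cum = p
    while u > cum:
        k += 1
        p *= 1.0 / k
        cum += p
    return k


class RunningATE:
    """Running difference-in-means treatment-effect estimate with
    one-pass Poisson-bootstrap replicates for a standard error.
    No individualized record is kept after it has been processed."""

    def __init__(self, n_bootstrap=200, seed=0):
        self.rng = random.Random(seed)
        self.arms = {0: RunningGroupStats(), 1: RunningGroupStats()}
        self.boot = [{0: RunningGroupStats(), 1: RunningGroupStats()}
                     for _ in range(n_bootstrap)]

    def update(self, treated, outcome):
        arm = 1 if treated else 0
        self.arms[arm].update(outcome)
        # Each replicate resamples the record with a Poisson(1) weight,
        # approximating the nonparametric bootstrap in a single pass.
        for rep in self.boot:
            w = poisson1(self.rng)
            if w:
                rep[arm].update(outcome, w)
        # The raw (treated, outcome) pair can now be deleted or anonymized.

    def estimate(self):
        return self.arms[1].mean - self.arms[0].mean

    def bootstrap_se(self):
        diffs = [rep[1].mean - rep[0].mean
                 for rep in self.boot
                 if rep[0].n > 0 and rep[1].n > 0]
        if len(diffs) < 2:
            return float("nan")
        mu = sum(diffs) / len(diffs)
        return math.sqrt(sum((d - mu) ** 2 for d in diffs) / (len(diffs) - 1))
```

A usage sketch: stream records one at a time, discarding each after the update.

```python
est = RunningATE(n_bootstrap=100, seed=42)
for treated, outcome in [(1, 2.3), (0, 1.1), (1, 2.8), (0, 0.9)]:
    est.update(treated, outcome)
print(est.estimate(), est.bootstrap_se())
```

A federated variant could apply the same recursions locally at each site and share only the aggregated statistics, though the specific aggregation scheme used in the paper is not reproduced here.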

Related readings and updates.

Understanding and Visualizing Data Iteration in Machine Learning

Successful machine learning (ML) applications require iterations on both modeling and the underlying data. While prior visualization tools for ML primarily focus on modeling, our interviews with 23 ML practitioners reveal that they frequently improve model performance by iterating on their data (e.g., collecting new data, adding labels) rather than their models. We also identify common types of data iterations and associated analysis tasks and…

Data Platform for Machine Learning

In this paper, we present a purpose-built data management system, MLdp, for all machine learning (ML) datasets. ML applications pose some unique requirements different from conventional data processing applications, including but not limited to: data lineage and provenance tracking, rich data semantics and formats, integration with diverse ML frameworks and access patterns, trial-and-error driven data exploration and evolution, rapid…