Successful machine learning (ML) applications require iterations on both modeling and the underlying data. While prior visualization tools for ML primarily focus on modeling, our interviews with 23 ML practitioners reveal that they frequently improve model performance by iterating on their data (e.g., collecting new data, adding labels) rather than on their models. We also identify common types of data iterations and associated analysis tasks and challenges. To help attribute data iterations to model performance, we design a collection of interactive visualizations and integrate them into a prototype, Chameleon, that lets users compare data features, training/testing splits, and performance across data versions. We present two case studies where developers apply Chameleon to their own evolving datasets on production ML projects. Our interface helps them verify data collection efforts, find failure cases stretching across data versions, capture data processing changes that impacted performance, and identify opportunities for future data iterations.
Related readings and updates.
Apple had three papers accepted at the ACM CHI Conference on Human Factors in Computing Systems (CHI), the premier international conference on human-computer interaction, in April 2020. Researchers from around the world gather at CHI to discuss, research, and design new ways for people to interact with technology. Although the conference was not held this year, you can read the accepted papers below.