Effective error analysis is critical for the successful development and deployment of computer vision and machine learning (CVML) models. One approach to understanding model errors is to summarize the common characteristics of error samples. This can be particularly challenging in tasks that involve unstructured, complex data such as images, where patterns are not always obvious. Another approach is to analyze error distributions across pre-defined categories, which requires analysts to hypothesize about potential error causes in advance. Without explicit labels or annotations, however, forming such hypotheses is difficult: analysts must rely on manual inspection, prior expertise, or intuition to isolate meaningful subgroups or patterns. This lack of structured guidance can hinder a comprehensive understanding of where models fail. To address these challenges, we introduce VibE, a semantic error analysis workflow designed to identify where and why CVML models fail at the subgroup level, even when labels or annotations are unavailable. VibE incorporates several core features to support error analysis: semantic subgroup generation, semantic summarization, candidate issue proposals, semantic concept search, and interactive subgroup analysis. By leveraging large foundation models (such as CLIP and GPT-4) alongside visual analytics, VibE enables developers to semantically interpret and analyze CVML model errors. This interactive workflow helps surface errors through subgroup discovery, supports hypothesis generation with auto-generated subgroup summaries and suggested issues, and enables hypothesis validation through semantic concept search and comparative analysis. Through three diverse CVML tasks and in-depth expert interviews, we demonstrate how VibE can assist error understanding and analysis.
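To make two of these features concrete, the sketch below shows how semantic subgroup generation and semantic concept search could be approximated with off-the-shelf CLIP embeddings. This is a minimal illustration under our own assumptions, not the paper's implementation: the helper names (embed_images, generate_subgroups, concept_search) and the model choice (openai/clip-vit-base-patch32 via Hugging Face transformers) are ours.

```python
# Minimal sketch (not VibE's actual code): approximate semantic subgroup
# generation and semantic concept search with off-the-shelf CLIP embeddings.
import torch
from PIL import Image
from sklearn.cluster import KMeans
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def embed_images(paths):
    """Return L2-normalized CLIP embeddings for a list of image files."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

@torch.no_grad()
def embed_text(concept):
    """Return an L2-normalized CLIP embedding for a free-text concept."""
    inputs = processor(text=[concept], return_tensors="pt", padding=True)
    feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def generate_subgroups(error_paths, n_groups=5):
    """Cluster error samples in CLIP embedding space into candidate subgroups."""
    embeddings = embed_images(error_paths).numpy()
    return KMeans(n_clusters=n_groups, n_init="auto").fit_predict(embeddings)

def concept_search(error_paths, concept, top_k=10):
    """Rank error samples by cosine similarity to a hypothesized concept,
    e.g. 'blurry photo taken at night'."""
    sims = (embed_images(error_paths) @ embed_text(concept).T).squeeze(-1)
    order = sims.argsort(descending=True)[:top_k].tolist()
    return [(error_paths[i], sims[i].item()) for i in order]
```

In a workflow of this kind, an analyst might first cluster misclassified validation images into subgroups with generate_subgroups, inspect each cluster, and then call concept_search with a hypothesized cause such as "low-light scene" to check whether that concept is overrepresented among the model's errors.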

Related readings and updates.

Generalizable Error Modeling for Human Data Annotation: Evidence from an Industry-Scale Search Data Annotation Program

Machine learning (ML) and artificial intelligence (AI) systems rely heavily on human-annotated data for training and evaluation. A major challenge in this context is the occurrence of annotation errors, as their effects can degrade model performance. This paper presents a predictive error model trained to detect potential errors in search relevance annotation tasks for three industry-scale ML applications (music streaming, video streaming, and…

Fair SA: Sensitivity Analysis for Fairness in Face Recognition

As the use of deep learning in high-impact domains becomes ubiquitous, it is increasingly important to assess the resilience of models. One such high-impact domain is face recognition, with real-world applications involving images affected by various degradations, such as motion blur or high exposure. Moreover, images captured across different attributes, such as gender and race, can also challenge the robustness of a face recognition…