When Does Optimizing a Proper Loss Yield Calibration?
In collaboration with Columbia University and Stanford University
Authors: Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, Preetum Nakkiran
Optimizing proper loss functions is popularly believed to yield predictors with good calibration properties; the intuition is that, for such losses, the global optimum is to predict the ground-truth probabilities, which is indeed calibrated. However, typical machine learning models are trained to approximately minimize loss over restricted families of predictors that are unlikely to contain the ground truth. Under what circumstances does optimizing proper loss over a restricted family yield calibrated models? What precise calibration guarantees does it give? In this work, we provide a rigorous answer to these questions. We replace global optimality with a local optimality condition stipulating that the (proper) loss of the predictor cannot be reduced much by post-processing its predictions with a certain family of Lipschitz functions. We show that any predictor with this local optimality property satisfies smooth calibration as defined by Kakade and Foster (2008) and Błasiok et al. (2023). Local optimality is plausibly satisfied by well-trained DNNs, which suggests an explanation for why they are calibrated by proper loss minimization alone. Finally, we show that the connection between local optimality and calibration error goes both ways: nearly calibrated predictors are also nearly locally optimal.
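To make the local-optimality condition concrete, the following minimal sketch (ours, not the paper's construction) estimates the post-processing gap of a predictor under the squared loss, which is proper, using a few hand-picked 1-Lipschitz update directions. A predictor that is locally optimal in the paper's sense should exhibit a small gap; the function names and the specific update family here are illustrative assumptions, not the family used in the paper.

import numpy as np

def squared_loss(preds, labels):
    return np.mean((preds - labels) ** 2)

def post_processing_gap(preds, labels, etas=(0.02, 0.05, 0.1)):
    # How much can squared loss be reduced by updates of the form
    # v -> clip(v + eta * w(v)) for 1-Lipschitz directions w bounded by 1?
    # A small value of this gap is a toy proxy for local optimality.
    base = squared_loss(preds, labels)
    directions = [
        lambda v: np.ones_like(v),    # shift all predictions up
        lambda v: -np.ones_like(v),   # shift all predictions down
        lambda v: v - 0.5,            # push predictions away from 1/2
        lambda v: 0.5 - v,            # pull predictions toward 1/2
        lambda v: np.sin(2 * np.pi * v) / (2 * np.pi),  # a local wiggle
    ]
    best_gap = 0.0
    for w in directions:
        for eta in etas:
            shifted = np.clip(preds + eta * w(preds), 0.0, 1.0)
            best_gap = max(best_gap, base - squared_loss(shifted, labels))
    return best_gap

# A systematically overconfident predictor has a large gap: shifting
# predictions down by a constant recovers most of the bias. The ground
# truth itself is calibrated, so its gap is near zero.
rng = np.random.default_rng(0)
true_p = rng.uniform(size=10_000)
labels = (rng.uniform(size=10_000) < true_p).astype(float)
biased = np.clip(true_p + 0.1, 0.0, 1.0)
print(post_processing_gap(biased, labels))   # clearly positive
print(post_processing_gap(true_p, labels))   # near zero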
A decision-theoretic characterization of perfect calibration is that an agent seeking to minimize a proper loss in expectation cannot improve their outcome by post-processing a perfectly calibrated predictor. Hu and Wu (FOCS’24) use this to define an approximate calibration measure called calibration decision loss (CDL), which measures the maximal improvement achievable by any post-processing over any proper loss. Unfortunately, CDL turns out to…
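In symbols, one natural way to write the quantity just described (our notation, not necessarily the paper's): for a predictor $f$, CDL takes the largest improvement over all proper losses $\ell$ and all post-processings $\kappa$,

\[
\mathrm{CDL}(f) \;=\; \sup_{\ell\ \mathrm{proper}} \Big( \mathbb{E}\big[\ell(f(X), Y)\big] \;-\; \inf_{\kappa : [0,1] \to [0,1]} \mathbb{E}\big[\ell(\kappa(f(X)), Y)\big] \Big),
\]

which is zero exactly when no loss-minimizing agent can gain from post-processing, i.e., for perfectly calibrated predictors.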
A Unifying Theory of Distance from Calibration
June 13, 2023 | research areas: Fairness; Methods and Algorithms | conference: ACM STOC
We study the fundamental question of how to define and measure the distance from calibration for probabilistic predictors. While the notion of perfect calibration is well-understood, there is no consensus on how to quantify the distance from perfect calibration. Numerous calibration measures have been proposed in the literature, but it is unclear how they compare to each other, and many popular measures such as Expected Calibration Error (ECE)…
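For reference, the ECE mentioned above is usually estimated by binning predictions; below is a minimal sketch of the standard binned estimator (equal-width bins and the mass-weighted absolute deviation are conventional choices, not prescribed by the paper).

import numpy as np

def expected_calibration_error(preds, labels, n_bins=10):
    # Binned ECE: mass-weighted average over bins of
    # |mean prediction - empirical label frequency|.
    bin_ids = np.minimum((preds * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            ece += mask.mean() * abs(preds[mask].mean() - labels[mask].mean())
    return ece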