Learning Elastic Costs to Shape Monge Displacements
Authors: Michal Klein, Aram-Alexandre Pooladian, Pierre Ablin, Eugène Ndiaye, Jonathan Niles-Weed, Marco Cuturi Cameto
Given a source and a target probability measure supported on $\mathbb{R}^d$, the Monge problem asks for the most efficient way to map one distribution to the other. This efficiency is quantified by defining a cost function between source and target data. Such a cost is often set by default in the machine learning literature to the squared-Euclidean distance, $\ell_2^2(\mathbf{x},\mathbf{y}) := \tfrac{1}{2}\|\mathbf{x}-\mathbf{y}\|_2^2$. The benefits of using elastic costs, defined through a regularizer $\tau$ as $c(\mathbf{x},\mathbf{y}) := \ell_2^2(\mathbf{x},\mathbf{y}) + \tau(\mathbf{x}-\mathbf{y})$, were recently highlighted in (Cuturi et al. 2023). Such costs shape the displacements of Monge maps $T$, i.e., the difference between a source point and its image, $T(\mathbf{x}) - \mathbf{x}$, by giving them a structure that matches that of the proximal operator of $\tau$. In this work, we make two important contributions to the study of elastic costs: (i) for any elastic cost, we propose a numerical method to compute Monge maps that are provably optimal. This provides a much-needed routine to create synthetic problems where the ground-truth OT map is known, by analogy to Brenier's theorem, which states that the gradient of any convex potential is always a valid Monge map for the $\ell_2^2$ cost; (ii) we propose a loss to learn the parameter $\theta$ of a parameterized regularizer $\tau_\theta$, and apply it in the case where $\tau_A(\mathbf{z}) := \|A^{\perp}\mathbf{z}\|_2^2$, with $A \in \mathbb{R}^{p \times d}$ and $A^{\perp}$ the projection onto the orthogonal complement of the row span of $A$. This regularizer promotes displacements that lie on a low-dimensional subspace of $\mathbb{R}^d$, spanned by the $p$ rows of $A$. We illustrate the success of our procedure on synthetic data, generated using our first contribution, in which we show near-perfect recovery of $A$'s subspace using only samples. We demonstrate the applicability of this method by showing predictive improvements on single-cell data tasks.
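To make the definitions above concrete, here is a minimal JAX sketch (not the authors' code) of the elastic cost $c(\mathbf{x},\mathbf{y}) = \ell_2^2(\mathbf{x},\mathbf{y}) + \tau(\mathbf{x}-\mathbf{y})$ and the subspace regularizer $\tau_A$, together with its proximal operator in closed form. The helper names (`make_subspace_regularizer`, `prox_tau_A`) are ours, and we assume $A$ has orthonormal rows so that $A^\top A$ is the projection onto its row span.

```python
import jax
import jax.numpy as jnp

def make_subspace_regularizer(A):
    """tau_A(z) = ||A_perp z||^2: squared norm of the component of z
    orthogonal to the row span of A (A assumed to have orthonormal rows)."""
    def tau(z):
        z_on = A.T @ (A @ z)  # projection of z onto the row span of A
        return jnp.sum((z - z_on) ** 2)
    return tau

def elastic_cost(x, y, tau):
    """Elastic cost c(x, y) = 0.5 * ||x - y||^2 + tau(x - y)."""
    return 0.5 * jnp.sum((x - y) ** 2) + tau(x - y)

def prox_tau_A(A, z, gamma=1.0):
    """Closed-form prox_{gamma * tau_A}(z) = argmin_u 0.5||u - z||^2 + gamma*tau_A(u).
    Splitting u into on-/off-subspace parts, the on-part stays at z_on and the
    off-part is shrunk by 1 / (1 + 2*gamma)."""
    z_on = A.T @ (A @ z)
    return z_on + (z - z_on) / (1.0 + 2.0 * gamma)

# Tiny demo: the prox keeps the on-subspace component of z intact and
# shrinks the orthogonal one, which is the sense in which tau_A promotes
# displacements lying on the subspace spanned by A's rows.
d, p = 10, 2
Q, _ = jnp.linalg.qr(jax.random.normal(jax.random.PRNGKey(0), (d, p)))
A = Q.T  # (p, d), orthonormal rows
tau_A = make_subspace_regularizer(A)

z = jax.random.normal(jax.random.PRNGKey(1), (d,))
u = prox_tau_A(A, z, gamma=5.0)
print(tau_A(z), tau_A(u))  # tau_A(u) is much smaller: u is nearly on-subspace
print(elastic_cost(z, jnp.zeros(d), tau_A))
```

Since, per the abstract, displacements of optimal Monge maps inherit the structure of the proximal operator of $\tau$, a map shaped by $\tau_A$ moves points almost entirely along the $p$-dimensional subspace spanned by $A$'s rows, with the off-subspace motion damped as in `prox_tau_A`.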