Optimal transport (OT) theory focuses, among all maps $T:\mathbb{R}^d\rightarrow \mathbb{R}^d$ that can morph a probability measure onto another, on those that are the "thriftiest", i.e. such that the averaged cost $c(\mathbf{x}, T(\mathbf{x}))$ between $\mathbf{x}$ and its image $T(\mathbf{x})$ be as small as possible. Many computational approaches have been proposed to estimate such Monge maps when $c$ is the $\ell_2^2$ distance, e.g., using entropic maps (Pooladian and Niles-Weed, 2021), or neural networks (Makkuva et al., 2020; Korotin et al., 2020). We propose a new model for transport maps, built on a family of translation-invariant costs $c(\mathbf{x},\mathbf{y}):=h(\mathbf{x}-\mathbf{y})$, where $h:=\tfrac{1}{2}\|\cdot\|_2^2+\tau$ and $\tau$ is a regularizer. We propose a generalization of the entropic map suitable for $h$, and highlight a surprising link tying it with the Bregman centroids of the divergence $D_h$ generated by $h$, and the proximal operator of $\tau$. We show that choosing a sparsity-inducing norm for $\tau$ results in maps that apply Occam's razor to transport, in the sense that the displacement vectors $\Delta(\mathbf{x}):= T(\mathbf{x})-\mathbf{x}$ they induce are sparse, with a sparsity pattern that varies depending on $\mathbf{x}$. We showcase the ability of our method to estimate meaningful OT maps for high-dimensional single-cell transcription data, in the $34000$-dimensional space of gene counts for cells, without using dimensionality reduction, thus retaining the ability to interpret all displacements at the gene level.
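To illustrate how a sparsity-inducing regularizer $\tau$ acts through its proximal operator, here is a minimal sketch for the special case $\tau = \lambda\|\cdot\|_1$, whose proximal operator is the classical soft-thresholding map. The displacement vector below is made-up illustrative data, not output of the paper's method.

```python
import numpy as np

def prox_l1(v, lam):
    # Proximal operator of tau = lam * ||.||_1 (soft-thresholding):
    # prox_tau(v) = argmin_u  0.5 * ||u - v||_2^2 + lam * ||u||_1
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Hypothetical displacement vector Delta(x) = T(x) - x.
delta = np.array([0.03, -1.2, 0.8, -0.01])
sparse_delta = prox_l1(delta, lam=0.1)
# Entries smaller than lam in magnitude are zeroed out,
# yielding a sparse displacement; the rest are shrunk toward 0.
```

Small coordinates of the displacement are set exactly to zero, which is the mechanism behind the "Occam's razor" sparsity pattern described above.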

Related readings and updates.

The Monge Gap: A Regularizer to Learn All Transport Maps

Optimal transport (OT) theory has been used in machine learning to study and characterize maps that can efficiently push forward a probability measure onto another. Recent works have drawn inspiration from Brenier's theorem, which states that when the ground cost is the squared-Euclidean distance, the "best" map to morph a continuous measure in $\mathcal{P}(\mathbb{R}^d)$ into another must be the gradient of a convex function. To…
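Brenier's theorem can be seen concretely in one dimension: between two Gaussians, the optimal map is affine and monotone, i.e. the gradient of a convex quadratic potential. This is a standard textbook fact, sketched below with sample data; it is not the estimator from the paper.

```python
import numpy as np

# Brenier map between 1-D Gaussians N(m1, s1^2) -> N(m2, s2^2):
#   T(x) = m2 + (s2 / s1) * (x - m1),
# the gradient of the convex potential f(x) = m2*x + (s2/(2*s1)) * (x - m1)**2.
m1, s1, m2, s2 = 0.0, 1.0, 3.0, 2.0
T = lambda x: m2 + (s2 / s1) * (x - m1)

# Push source samples forward and check they match the target law.
x = np.random.default_rng(0).normal(m1, s1, 100_000)
y = T(x)
# y is (approximately) distributed as N(m2, s2^2).
```

Because $s_2/s_1 > 0$, the map is increasing, consistent with being the gradient of a convex function.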
Learning to Generate Radiance Fields of Indoor Scenes

People have an innate capability to understand the 3D visual world and make predictions about how the world could look from different points of view, even when relying on few visual observations. We have this spatial reasoning ability because of the rich mental models of the visual world we develop over time. These mental models can be interpreted as a prior belief over which configurations of the visual world are most likely to be observed. In this case, a prior is a probability distribution over the 3D visual world.
