# Supervised Training of Conditional Monge Maps

In collaboration with ETH Zurich

Authors: Charlotte Bunne, Andreas Krause, Marco Cuturi

Paper published December 2022


Optimal transport (OT) theory describes general principles to define and select, among many possible choices, the most efficient way to map a probability measure onto another. That theory has been mostly used to estimate, given a pair of source and target probability measures, a map that can efficiently morph the source onto the target. In many applications, such as predicting cell responses to treatments, pairs of measures are naturally tagged with a *context* variable. We introduce CondOT, a multi-task approach that learns a *global* map conditioned on context, expected not only to fit *all pairs* in the dataset but also to *generalize* to produce meaningful maps for unseen contexts. Our approach relies on *partially input convex neural networks*, for which we introduce a robust and efficient initialization strategy inspired by Gaussian approximations. We demonstrate the ability of CondOT to infer the effect of an arbitrary combination of genetic or therapeutic perturbations on single cells, using only observations of the effects of said perturbations separately.
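The *partially input convex neural networks* mentioned above (PICNNs) output a scalar that is convex in one group of inputs while depending arbitrarily on a context input. A minimal sketch of that constraint in `numpy`, assuming a single hidden layer; the class name `TinyPICNN` and all sizes are illustrative simplifications, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(w):
    # maps free parameters to nonnegative weights, preserving convexity
    return np.log1p(np.exp(w))

class TinyPICNN:
    """Minimal partially input convex network: the scalar output f(x; c)
    is convex in x for every context c, but unconstrained in c."""
    def __init__(self, dx, dc, h=16):
        self.Wc = rng.normal(size=(h, dc))      # context path: free weights
        self.Wx0 = rng.normal(size=(h, dx))     # affine in x before activation
        self.Wz_raw = rng.normal(size=(1, h))   # constrained >= 0 via softplus
        self.Wx1 = rng.normal(size=(1, dx))     # extra linear term in x

    def __call__(self, x, c):
        u = np.tanh(self.Wc @ c)                # context embedding (arbitrary)
        z = np.maximum(self.Wx0 @ x + u, 0.0)   # ReLU of affine-in-x: convex
        # nonnegative combination of convex functions + linear term: convex in x
        return (softplus(self.Wz_raw) @ z + self.Wx1 @ x).item()

# midpoint convexity check in x, for a fixed context c
net = TinyPICNN(dx=3, dc=2)
x1, x2, c = rng.normal(size=3), rng.normal(size=3), rng.normal(size=2)
lhs = net((x1 + x2) / 2, c)
rhs = 0.5 * (net(x1, c) + net(x2, c))
```

Convexity in `x` holds because a ReLU applied to an affine function of `x` is convex, and nonnegative combinations of convex functions stay convex; a conditional map can then be obtained as the gradient of such a potential in `x` for each fixed context.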

Optimal transport (OT) theory focuses, among all maps that can morph a probability measure onto another, on those that are the "thriftiest", i.e. such that the averaged cost c(x, T(x)) between a point x and its image T(x) be as small as possible. Many computational approaches have been proposed to estimate such Monge maps when c is the squared-Euclidean distance, e.g., using entropic maps (Pooladian and Niles-Weed, 2021), or neural networks (Makkuva et al., 2020; Korotin et al., 2020)…
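The entropic maps mentioned above can be sketched from samples: run Sinkhorn iterations on the squared-Euclidean cost, then send each source point to the barycentric projection of the resulting transport plan. A minimal `numpy` sketch, assuming uniform weights on both samples; the function name and hyperparameters are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sinkhorn_entropic_map(x, y, eps=0.5, n_iters=500):
    """Estimate a Monge map between samples x ~ mu and y ~ nu:
    Sinkhorn on the squared-Euclidean cost, then barycentric projection."""
    n, m = len(x), len(y)
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # cost matrix
    K = np.exp(-C / eps)                                # Gibbs kernel
    a = np.full(n, 1.0 / n)                             # uniform source weights
    b = np.full(m, 1.0 / m)                             # uniform target weights
    v = np.ones(m)
    for _ in range(n_iters):                            # Sinkhorn scalings
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                     # entropic plan
    return (P @ y) / P.sum(axis=1, keepdims=True)       # barycentric projection

x = rng.normal(size=(200, 2))                           # source samples
y = rng.normal(size=(200, 2)) + np.array([1.0, 0.0])    # shifted target samples
mapped = sinkhorn_entropic_map(x, y)                    # estimated T(x_i)
```

Because the plan's column marginals match the target weights at convergence, the mapped points recover the target's mean; smaller `eps` gives a sharper (less blurred) map at the price of slower, less stable iterations.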

Optimal transport (OT) theory has been used in machine learning to study and characterize maps that can efficiently push forward a probability measure onto another.
Recent works have drawn inspiration from Brenier's theorem, which states that when the ground cost is the squared-Euclidean distance, the "best" map to morph a continuous measure into another must be the gradient of a convex function.
To exploit that result, Makkuva et al…
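Brenier's theorem can be illustrated in closed form: for the convex quadratic potential f(x) = ½ xᵀAx + bᵀx with A symmetric positive definite, the gradient map T(x) = ∇f(x) = Ax + b is affine and pushes a standard Gaussian onto another Gaussian. A small `numpy` sketch; the particular A and b are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# symmetric positive-definite A makes f(x) = 0.5 x^T A x + b^T x convex,
# so T = grad f is a Monge map by Brenier's theorem
A = np.array([[2.0, 0.5],
              [0.5, 1.5]])
b = np.array([1.0, -2.0])

def T(x):
    """Gradient of the convex potential f, applied to each sample row."""
    return x @ A.T + b

x = rng.normal(size=(20000, 2))   # samples from N(0, I)
y = T(x)                          # pushed forward onto N(b, A @ A)
```

Since A is symmetric, the pushforward of N(0, I) under T has mean b and covariance A·Aᵀ = A², which the sample statistics of `y` reproduce up to Monte-Carlo error.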
