On Fitting Flow Models with Large Sinkhorn Couplings
Authors: Stephen Zhang†*, Alireza Mousavi-Hosseini‡*, Michal Klein, Marco Cuturi
*Equal Contributors
Flow models transform data gradually from one modality (e.g. noise) to another (e.g. images). Such models are parameterized by a time-dependent velocity field, trained to fit segments connecting pairs of source and target points. When the pairing between source and target points is given, training flow models boils down to a supervised regression problem. When no such pairing exists, as is the case when generating data from noise, training flows is much harder. A popular approach lies in picking source and target points independently. This can, however, lead to velocity fields that are not only slow to train, but also costly to integrate at inference time. In theory, one would greatly benefit from training flow models by sampling pairs from an optimal transport (OT) measure coupling source and target, since this would lead to a highly efficient flow solving the Benamou and Brenier dynamical OT problem. In practice, recent works have proposed to sample mini-batches of n source and n target points and reorder them using an OT solver to form better pairs. These works have advocated using batches of size n≈256, and considered OT solvers that return couplings that are either sharp (using e.g. the Hungarian algorithm) or blurred (using e.g. entropic regularization, a.k.a. Sinkhorn). We follow in the footsteps of these works by exploring the benefits of increasing n by three to four orders of magnitude, and looking more carefully at the effect of the entropic regularization ε used in the Sinkhorn algorithm. Our analysis is facilitated by new scale-invariant quantities to report the sharpness of a coupling, while our sharded computations across multiple GPUs or GPU nodes allow scaling up n. We show that in both synthetic and image generation tasks, flow models greatly benefit when fitted with large Sinkhorn couplings, with a low entropic regularization ε.
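To make the recipe above concrete, here is a minimal, self-contained sketch of the idea (not the authors' implementation, which shards the Sinkhorn computation across GPUs and uses batch sizes far larger than shown here): couple a mini-batch of n source and n target points with a log-domain Sinkhorn solver at regularization ε, resample pairs from the resulting coupling, and build the regression targets used to fit a velocity field. The helper names, batch size, and ε below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch, assuming a plain NumPy/SciPy log-domain Sinkhorn and
# hypothetical helpers sinkhorn_coupling / sample_pairs (not the paper's code).
import numpy as np
from scipy.special import logsumexp


def sinkhorn_coupling(x, y, eps, n_iters=500):
    """Entropic OT coupling between two equal-size point clouds (uniform weights)."""
    n = x.shape[0]
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
    log_a = log_b = np.full(n, -np.log(n))                  # uniform marginals
    f, g = np.zeros(n), np.zeros(n)
    for _ in range(n_iters):                                # Sinkhorn iterations, log domain
        f = -eps * logsumexp((g[None, :] - cost) / eps + log_b[None, :], axis=1)
        g = -eps * logsumexp((f[:, None] - cost) / eps + log_a[:, None], axis=0)
    # Coupling P_ij = a_i * b_j * exp((f_i + g_j - C_ij) / eps)
    return np.exp((f[:, None] + g[None, :] - cost) / eps + log_a[:, None] + log_b[None, :])


def sample_pairs(coupling, rng):
    """For each source point i, draw a target index j with probability proportional to P_ij."""
    rows = coupling / coupling.sum(axis=1, keepdims=True)
    return np.array([rng.choice(rows.shape[1], p=row) for row in rows])


# Toy usage: couple Gaussian noise to a shifted Gaussian. Smaller eps yields a
# sharper (closer to a permutation) coupling, but needs more Sinkhorn iterations.
rng = np.random.default_rng(0)
n, d, eps = 512, 2, 0.05
x0 = rng.normal(size=(n, d))            # source mini-batch (e.g. noise)
x1 = rng.normal(size=(n, d)) + 3.0      # target mini-batch (e.g. data)
P = sinkhorn_coupling(x0, x1, eps)
j = sample_pairs(P, rng)

# Flow-matching regression targets on the segments connecting the sampled pairs:
t = rng.uniform(size=(n, 1))
x_t = (1 - t) * x0 + t * x1[j]          # point on the segment from x0[i] to x1[j[i]]
v_target = x1[j] - x0                   # regression target for a velocity field v_theta(t, x_t)
```

Resampling one target index per source row is just one simple way to turn a blurred Sinkhorn coupling into training pairs; as ε decreases, the coupling approaches a permutation and this sampling becomes nearly deterministic.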