
The matching principles behind optimal transport (OT) play an increasingly important role in machine learning, a trend that can be observed when OT is used to disambiguate datasets in applications (e.g., single-cell genomics) or to improve more complex methods (e.g., balanced attention in transformers, self-supervised learning). To scale to more challenging problems, there is a growing consensus that OT requires solvers that can operate on millions, not thousands, of points. The low-rank optimal transport (LOT) approach advocated by Scetbon et al. (2021) holds several promises in that regard, and was shown to complement more established entropic regularization approaches: it can be inserted into more complex pipelines, such as quadratic OT. LOT restricts the search for low-cost couplings to those that have low nonnegative rank, yielding linear-time algorithms in cases of interest. However, these promises can only be fulfilled if the LOT approach is seen as a legitimate contender to entropic regularization when compared on properties of interest, where the scorecard typically includes theoretical properties (statistical complexity and relation to other methods) and practical aspects (debiasing, hyperparameter tuning, initialization). We target each of these areas in this paper to cement the impact of low-rank approaches in computational OT.
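To make the low-rank factorization concrete, here is a minimal NumPy sketch of how such a coupling can be stored and applied. It is not the LOT solver itself (LOT computes the factors with mirror descent): it instantiates the trivially feasible independence coupling in factored form, and the marginals chosen are illustrative.

```python
import numpy as np

# Minimal sketch of the factored coupling format behind LOT (not the solver
# itself; LOT computes the factors with mirror descent). A coupling of
# nonnegative rank <= r is stored as P = Q @ diag(1/g) @ R.T, where the
# nonnegative factors Q (n x r) and R (m x r) satisfy
# Q 1_r = a, R 1_r = b, and Q^T 1_n = R^T 1_m = g (a shared inner marginal).
n, m, r = 10_000, 8_000, 16
a = np.full(n, 1.0 / n)   # source marginal
b = np.full(m, 1.0 / m)   # target marginal
g = np.full(r, 1.0 / r)   # inner marginal (illustrative choice)

# A trivially feasible instance: the independence coupling a b^T in factored
# form (every column of Q is a scaled copy of a, likewise for R and b).
Q = np.outer(a, g)
R = np.outer(b, g)

# Applying P to a vector never materializes the n x m matrix:
# P @ v = Q @ ((R.T @ v) / g) costs O((n + m) r) rather than O(n m),
# which is the source of the linear-time behavior mentioned above.
v = np.random.default_rng(0).normal(size=m)
Pv = Q @ ((R.T @ v) / g)
print(Pv.shape)  # (10000,)
```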

Related readings and updates.

Progressive Entropic Optimal Transport Solvers

Optimal transport (OT) has profoundly impacted machine learning by providing theoretical and computational tools to realign datasets. In this context, given two large point clouds of sizes $n$ and $m$ in $\mathbb{R}^d$, entropic OT (EOT) solvers have emerged as the most reliable tool to either solve the Kantorovich problem and output an $n \times m$ coupling matrix, or to solve the Monge problem and learn a vector-valued push-forward map…
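As a rough illustration of what such solvers compute, the following is a minimal NumPy sketch of the Sinkhorn fixed-point iteration that underlies EOT solvers, under assumed squared Euclidean costs and uniform marginals; practical implementations work in log-space with additional safeguards.

```python
import numpy as np

# Minimal sketch of the Sinkhorn fixed-point iteration behind EOT solvers,
# assuming squared Euclidean cost and uniform marginals. Production solvers
# work in log-space with further safeguards; this is only illustrative.
def sinkhorn(x, y, epsilon=0.5, n_iters=200):
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # n x m cost matrix
    K = np.exp(-C / epsilon)                            # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))                   # uniform marginals
    b = np.full(len(y), 1.0 / len(y))
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):                            # alternate marginal fits
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]                  # the n x m coupling

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(50, 3)), rng.normal(size=(40, 3)))
print(P.shape, P.sum())  # (50, 40), ~1.0 once marginals are (nearly) matched
```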

Unbalanced Low-Rank Optimal Transport Solvers

*Equal contributors

Two salient limitations have long hindered the relevance of optimal transport methods to machine learning. First, the $O(n^3)$ computational cost of standard sample-based solvers (when used on batches of $n$ samples) is prohibitive. Second, the mass conservation constraint makes OT solvers too rigid in practice: because they must match *all* points from both measures, their output can be heavily influenced by…
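For context, the sketch below shows the standard KL-relaxed ("unbalanced") Sinkhorn updates, which soften the mass conservation constraint so that outliers need not be matched. It is a toy illustration under assumed parameters (rho, epsilon), following the classic unbalanced entropic scheme, not the unbalanced low-rank solver this paper develops.

```python
import numpy as np

# Minimal sketch of the classic KL-relaxed ("unbalanced") Sinkhorn updates,
# which soften the mass conservation constraint; rho and epsilon are
# illustrative, and this is not the low-rank solver developed in the paper.
def unbalanced_sinkhorn(x, y, epsilon=0.5, rho=1.0, n_iters=200):
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / epsilon)
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    u, v = np.ones_like(a), np.ones_like(b)
    tau = rho / (rho + epsilon)   # tau -> 1 recovers the balanced updates
    for _ in range(n_iters):      # exponent < 1 lets marginals be violated
        u = (a / (K @ v)) ** tau
        v = (b / (K.T @ u)) ** tau
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
x = rng.normal(size=(60, 2))
y = np.vstack([rng.normal(size=(50, 2)),
               rng.normal(size=(10, 2)) + 4.0])   # 10 far-away outliers
P = unbalanced_sinkhorn(x, y)
print(P.sum(axis=0)[-10:].sum())  # outlier columns receive very little mass
```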