

Two salient limitations have long hindered the relevance of optimal transport (OT) methods to machine learning. First, the O(n^3) computational cost of standard sample-based solvers (when used on batches of n samples) is prohibitive. Second, the mass-conservation constraint makes OT solvers too rigid in practice: because they must match all points from both measures, their output can be heavily influenced by outliers. A flurry of recent works has addressed these computational and modeling limitations, but has resulted in two separate strains of methods: while the computational outlook was much improved by entropic regularization, more recent O(n) linear-time low-rank solvers hold the promise of scaling OT up further. In terms of modeling flexibility, the rigidity of mass conservation has been eased for entropy-regularized OT thanks to unbalanced variants of OT, which penalize couplings whose marginals deviate from those specified by the source and target distributions. The goal of this paper is to merge these two strains, low-rank and unbalanced, to deliver solvers that are both scalable and versatile. We propose custom algorithms that implement these extensions for the linear OT problem and its fused Gromov-Wasserstein generalization, and demonstrate their practical relevance on challenging spatial transcriptomics matching problems. These algorithms are implemented in the ott-jax toolbox.
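
As a rough illustration of how the two relaxations combine in practice, the sketch below sets up a point-cloud OT problem in ott-jax with relaxed marginal constraints (tau_a, tau_b < 1 for the unbalanced case) and a low-rank Sinkhorn solver of fixed rank. The exact entry points, and whether the low-rank solver honors the unbalanced relaxation, depend on the installed ott-jax version; this is an assumption-laden sketch, not the paper's reference implementation.

```python
# Minimal sketch: unbalanced + low-rank OT with ott-jax (version-dependent;
# treat the tau/LRSinkhorn combination as an assumption).
import jax
import jax.numpy as jnp
from ott.geometry import pointcloud
from ott.problems.linear import linear_problem
from ott.solvers.linear import sinkhorn_lr

key1, key2 = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(key1, (1000, 2))        # source samples
y = jax.random.normal(key2, (1200, 2)) + 1.0  # target samples

geom = pointcloud.PointCloud(x, y, epsilon=0.05)

# tau_a, tau_b < 1 relax the marginal (mass-conservation) constraints;
# tau_a = tau_b = 1.0 recovers the standard balanced problem.
prob = linear_problem.LinearProblem(geom, tau_a=0.95, tau_b=0.95)

# A rank r << n keeps the coupling in factored form, so each iteration
# scales linearly in the number of samples rather than quadratically.
solver = sinkhorn_lr.LRSinkhorn(rank=10)
out = solver(prob)
print(out.reg_ot_cost)
```

The rank acts as the main accuracy/speed knob: small ranks give the cheapest iterations, while larger ranks bring the factored coupling closer to the unrestricted (entropic) solution.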

Related readings and updates.

Low-Rank Optimal Transport: Approximation, Statistics and Debiasing

The matching principles behind optimal transport (OT) play an increasingly important role in machine learning, a trend which can be observed when OT is used to disambiguate datasets in applications (e.g. single-cell genomics) or used to improve more complex methods (e.g. balanced attention in transformers or self-supervised learning). To scale to more challenging problems, there is a growing consensus that OT requires solvers that can operate on…

Implicit Greedy Rank Learning in Autoencoders via Overparameterized Linear Networks

This paper was accepted at the workshop on Overparameterization: Pitfalls and Opportunities at the ICML 2021 conference. Deep linear networks trained with gradient descent yield low rank solutions, as is typically studied in matrix factorization. In this paper, we take a step further and analyze implicit rank regularization in autoencoders. We show greedy learning of low-rank latent codes induced by a linear sub-network at the autoencoder…