Progressive Entropic Optimal Transport Solvers

Authors: Parnian Kassraie, Aram-Alexandre Pooladian, Michal Klein, James Thornton, Jonathan Niles-Weed, Marco Cuturi

NeurIPS 2024

Reproducibility Variable Result LLM Response
Research Type Experimental We run experiments to evaluate the performance of PROGOT across various datasets, both as a map estimator and as a method for producing couplings between the source and target points. We also prove the statistical consistency of PROGOT when estimating OT maps.
Researcher Affiliation Collaboration Parnian Kassraie (ETH Zurich, Apple) pkassraie@ethz.ch; Aram-Alexandre Pooladian (New York University) aram-alexandre.pooladian@nyu.edu; Michal Klein (Apple) michalk@apple.com; James Thornton (Apple) jamesthornton@apple.com; Jonathan Niles-Weed (New York University) jnw@cims.nyu.edu; Marco Cuturi (Apple) cuturi@apple.com
Pseudocode Yes Algorithm 1 SINK(a, X, b, Y, ε, τ, f_init, g_init). Algorithm 2 PROGOT(a, X, b, Y, (ε_k, α_k, τ_k)_k). Algorithm 3 T_Prog[b, Y, (g^(k), ε_k, α_k)_k].
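To make the pseudocode concrete, the following is a minimal pure-JAX sketch of the progressive scheme suggested by Algorithms 1-3: each step runs a Sinkhorn solve with regularization ε_k, forms the barycentric (entropic) map, and displaces the source cloud by a fraction α_k of that displacement. The helper names, the squared-Euclidean cost, and the fixed iteration count are assumptions made for illustration; this is a sketch, not the authors' OTT-JAX implementation.

```python
# Minimal pure-JAX sketch of the progressive loop suggested by Algorithms 1-3
# (an illustration, not the authors' OTT-JAX implementation). Assumptions:
# squared-Euclidean cost, a fixed number of Sinkhorn iterations, and the
# convention P_ij = a_i * b_j * exp((f_i + g_j - C_ij) / eps).
import jax
import jax.numpy as jnp


def sinkhorn_potentials(a, x, b, y, eps, n_iters=200):
    """Bare-bones log-domain Sinkhorn; returns dual potentials (f, g) and the cost matrix."""
    cost = jnp.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    f = jnp.zeros(x.shape[0])
    g = jnp.zeros(y.shape[0])
    for _ in range(n_iters):
        f = -eps * jax.scipy.special.logsumexp(
            (g[None, :] - cost) / eps + jnp.log(b)[None, :], axis=1)
        g = -eps * jax.scipy.special.logsumexp(
            (f[:, None] - cost) / eps + jnp.log(a)[:, None], axis=0)
    return f, g, cost


def progot_sketch(a, x, b, y, eps_schedule, alpha_schedule):
    """Progressively displace the source cloud toward the target."""
    for eps_k, alpha_k in zip(eps_schedule, alpha_schedule):
        f, g, cost = sinkhorn_potentials(a, x, b, y, eps_k)
        # Entropic coupling and barycentric projection T(x_i) = sum_j P_ij y_j / a_i.
        log_p = ((f[:, None] + g[None, :] - cost) / eps_k
                 + jnp.log(a)[:, None] + jnp.log(b)[None, :])
        t_x = jnp.exp(log_p) @ y / a[:, None]
        # Move only a fraction alpha_k of the way along the current displacement.
        x = (1.0 - alpha_k) * x + alpha_k * t_x
    return x
```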
Open Source Code Yes The code for PROGOT is included in the OTT-JAX package [Cuturi et al., 2022b]. We include the base implementation of our main algorithm in JAX as supplementary material.
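Since the released code sits inside OTT-JAX, the hedged sketch below shows only the standard entropic solve that the package exposes (PointCloud, LinearProblem, Sinkhorn). The exact PROGOT entry point within OTT-JAX is not specified here, so this illustrates the base solver the implementation builds on rather than the PROGOT API itself.

```python
# Hedged OTT-JAX usage sketch: a plain entropic OT solve between two point
# clouds. Only standard, documented OTT-JAX components are used; the
# PROGOT-specific entry point is not shown because its exact module path is
# not given in this summary.
import jax
import jax.numpy as jnp
from ott.geometry import pointcloud
from ott.problems.linear import linear_problem
from ott.solvers.linear import sinkhorn

key_x, key_y = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(key_x, (256, 2))          # source samples
y = jax.random.normal(key_y, (256, 2)) + 2.0    # shifted target samples

geom = pointcloud.PointCloud(x, y, epsilon=1e-2)
out = sinkhorn.Sinkhorn()(linear_problem.LinearProblem(geom))

coupling = out.matrix              # (256, 256) entropic coupling
t_x = 256 * (coupling @ y)         # barycentric map estimate under uniform weights
```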
Open Datasets Yes We consider the sci-Plex single-cell RNA sequencing data from [Srivatsan et al., 2020]; the entire grayscale CIFAR10 dataset [Krizhevsky et al., 2009]; the single-cell multiplex data of Bunne et al. [2023]; and Gaussian Mixture data, using the dataset of Korotin et al. [2021].
Dataset Splits Yes To choose ε for entropic estimators, we split the training data to get an evaluation set and perform 5-fold cross-validation on the grid {2^{-3}, . . . , 2^{3}} · ε_0.
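To make the selection procedure concrete, below is an illustrative 5-fold cross-validation sketch over the grid {2^{-3}, . . . , 2^{3}} · ε_0. The entropic-map estimator (obtained from OTT-JAX dual potentials) and the held-out scoring by regularized OT cost are assumptions made for illustration; the paper's exact validation criterion may differ.

```python
# Illustrative 5-fold cross-validation over the epsilon grid described above.
# Assumptions: uniform weights, an entropic map obtained from OTT-JAX dual
# potentials, and a held-out score given by the regularized OT cost between
# mapped validation sources and validation targets.
import jax.numpy as jnp
from ott.geometry import pointcloud
from ott.problems.linear import linear_problem
from ott.solvers.linear import sinkhorn


def entropic_map(x_train, y_train, eps):
    """Fit an entropic map on the training fold; returns a callable x -> T(x)."""
    geom = pointcloud.PointCloud(x_train, y_train, epsilon=eps)
    out = sinkhorn.Sinkhorn()(linear_problem.LinearProblem(geom))
    return out.to_dual_potentials().transport


def heldout_cost(x_mapped, y_val, eps_eval=1e-2):
    """Regularized OT cost between mapped validation sources and targets."""
    geom = pointcloud.PointCloud(x_mapped, y_val, epsilon=eps_eval)
    return sinkhorn.Sinkhorn()(linear_problem.LinearProblem(geom)).reg_ot_cost


def pick_epsilon(x, y, eps0, n_folds=5):
    grid = eps0 * 2.0 ** jnp.arange(-3, 4)          # {2^-3, ..., 2^3} * eps0
    folds = jnp.array_split(jnp.arange(x.shape[0]), n_folds)
    scores = []
    for eps in grid:
        errs = []
        for k in range(n_folds):
            val = folds[k]
            train = jnp.concatenate([f for j, f in enumerate(folds) if j != k])
            t_hat = entropic_map(x[train], y[train], eps)
            errs.append(heldout_cost(t_hat(x[val]), y[val]))
        scores.append(jnp.mean(jnp.array(errs)))
    return grid[jnp.argmin(jnp.array(scores))]
```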
Hardware Specification Yes Experiments were run on a single Nvidia A100 GPU for a total of 24 hours. Smaller experiments and debugging were performed on a single MacBook M2 Max.
Software Dependencies No The paper mentions 'JAX' and 'OTT-JAX' as the frameworks in which the code is implemented and released. However, it does not give explicit version numbers for these dependencies or for other key libraries used.
Experiment Setup Yes In map experiments, unless mentioned otherwise, we run PROGOT for K = 16 steps, with a constant-speed schedule for α_k, and the regularization schedule set via Algorithm 4 with β_0 = 5 and sp {2^{-3}, . . . , 2^{3}}. We choose the number of hidden layers for both as [128, 64, 64]. For the ICNN we use a learning rate η = 10^{-3}, batch size b = 256, and train it using the Adam optimizer [Kingma and Ba, 2014] for 2000 iterations. For the Monge Gap we set the regularization constant λ_MG = 10, λ_cons = 0.1, and the Sinkhorn regularization to ε = 0.01. We train the Monge Gap in a similar setting, except that we set η = 0.01.
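For reference, the quoted hyperparameters can be gathered into a single configuration sketch. The field names are illustrative, and the closed form used for the constant-speed α_k schedule (equal interpolation progress per step) is an assumption about what "constant speed" means, not a formula stated in this summary.

```python
# The hyperparameters quoted above, gathered into one configuration sketch.
# Field names are illustrative; the alpha_k formula encodes the assumption
# that "constant speed" means equal interpolation progress per step.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ProgOTMapConfig:
    num_steps: int = 16                                             # K
    beta0: float = 5.0                                              # Algorithm 4 initialization
    eps_grid_exponents: Tuple[int, ...] = (-3, -2, -1, 0, 1, 2, 3)  # powers of 2

    def alpha_schedule(self) -> List[float]:
        # Equal share of the remaining displacement at every step, so that the
        # cumulative progress after step k is exactly k / K (constant speed).
        K = self.num_steps
        return [1.0 / (K - k) for k in range(K)]


@dataclass
class ICNNBaselineConfig:
    hidden_layers: Tuple[int, ...] = (128, 64, 64)
    learning_rate: float = 1e-3                      # eta
    batch_size: int = 256
    num_iterations: int = 2000                       # trained with Adam


@dataclass
class MongeGapBaselineConfig:
    hidden_layers: Tuple[int, ...] = (128, 64, 64)
    learning_rate: float = 1e-2                      # eta
    lambda_mg: float = 10.0                          # Monge gap regularization
    lambda_cons: float = 0.1
    sinkhorn_epsilon: float = 0.01
```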