Mirror Sinkhorn: Fast Online Optimization on Transport Polytopes
Authors: Marin Ballu, Quentin Berthet
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide theoretical guarantees for convex objectives and experimental results showcasing its effectiveness on both synthetic and real-world data. |
| Researcher Affiliation | Collaboration | University of Cambridge, UK; Google DeepMind, Paris, France. Correspondence to: Quentin Berthet <qberthet@google.com>. |
| Pseudocode | Yes | Algorithm 1 Mirror Sinkhorn; Algorithm 2 Rounding algorithm (Altschuler et al., 2017) |
| Open Source Code | No | The paper does not provide any explicit statement or link to open-source code for the described methodology. |
| Open Datasets | Yes | We also include an illustration of our method on two datasets used in (Altschuler et al., 2017), following their experimental setup: we use as instances of OT random pairs from MNIST (10 in total), and simulated SQUARES data... The SNARE-seq data (Chen et al., 2019) consists of 1,047 vectors in dimensions 10 and 19, respectively. |
| Dataset Splits | No | The paper mentions using datasets like MNIST and SNARE-seq but does not provide specific train/validation/test split percentages, absolute sample counts for splits, or references to predefined splits needed for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | We minimize this functional by taking λ = 3, with a k-NN graph taken for k = 5. We recall that in this case, n = 1,047. We are applying a step-size regime proportional to 1/(t + 1), for T = 10^5 steps. (An illustrative sketch of this setup follows the table.) |
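
The Pseudocode and Experiment Setup rows refer to Algorithm 1 (Mirror Sinkhorn), the rounding procedure of Altschuler et al. (2017), and a step-size schedule proportional to 1/(t + 1) run for T = 10^5 steps. As a rough illustration of how those pieces could fit together, here is a minimal NumPy sketch; the update order, the iterate averaging, the rounding details, and all function names are assumptions for illustration, not the authors' reference implementation (the paper links no open-source code).

```python
import numpy as np

def round_to_polytope(X, r, c):
    """Round X onto the transport polytope U(r, c), in the spirit of the
    rounding procedure of Altschuler et al. (2017) cited in the table:
    scale rows and columns down to the marginals, then add back the
    missing mass with a rank-one correction."""
    X = X * np.minimum(1.0, r / X.sum(axis=1))[:, None]
    X = X * np.minimum(1.0, c / X.sum(axis=0))[None, :]
    dr = r - X.sum(axis=1)          # nonnegative row deficits
    dc = c - X.sum(axis=0)          # nonnegative column deficits
    if dr.sum() > 0:
        X = X + np.outer(dr, dc) / dr.sum()
    return X

def mirror_sinkhorn_sketch(C, r, c, T=10**5):
    """Illustrative Mirror-Sinkhorn-style loop (not the authors' code):
    a multiplicative step against the cost, then renormalisation of one
    marginal at a time, with a step size proportional to 1/(t + 1)."""
    X = np.outer(r, c)              # feasible, strictly positive start
    X_avg = np.zeros_like(X)
    for t in range(T):
        eta = 1.0 / (t + 1)                          # 1/(t + 1) schedule
        X = X * np.exp(-eta * C)                     # entropic/mirror step
        if t % 2 == 0:
            X *= (r / X.sum(axis=1))[:, None]        # match row marginal r
        else:
            X *= (c / X.sum(axis=0))[None, :]        # match column marginal c
        X_avg += (X - X_avg) / (t + 1)               # running average of iterates
    return round_to_polytope(X_avg, r, c)            # feasible output plan

# Example call on a small random instance (illustrative only).
rng = np.random.default_rng(0)
n, m = 20, 30
C = rng.random((n, m))
r = np.full(n, 1.0 / n)
c = np.full(m, 1.0 / m)
plan = mirror_sinkhorn_sketch(C, r, c, T=1000)
```

The small random instance at the end only checks that the loop runs; reproducing the paper's MNIST, SQUARES, or SNARE-seq results would additionally require the data handling and objective described in the quoted experiment setup.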