Low-Rank Sinkhorn Factorization
Authors: Meyer Scetbon, Marco Cuturi, Gabriel Peyré
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We prove the non-asymptotic stationary convergence of this algorithm and illustrate its efficiency on benchmark experiments. |
| Researcher Affiliation | Collaboration | 1CREST, ENSAE 2Google Brain 3Ecole Normale Supérieure, PSL University 4CNRS |
| Pseudocode | Yes | Algorithm 1 Sinkhorn(K, a, b, δ). Inputs: K, a, b, δ, u. Repeat v ← b/(Kᵀu), u ← a/(Kv) until ‖u ∘ (Kv) − a‖₁ + ‖v ∘ (Kᵀu) − b‖₁ < δ. Result: u, v |
| Open Source Code | No | The paper does not provide an explicit statement about the release of source code or a link to a code repository for the described methodology. |
| Open Datasets | No | The paper does not provide concrete access information (link, DOI, repository, or formal citation with authors/year) for any publicly available dataset used in the experiments. It mentions using "Gaussian mixture densities" and measures "supported on graphs" or "in R2", but no details on how to access these specific datasets. |
| Dataset Splits | No | The paper does not explicitly provide specific training/validation/test dataset split percentages, sample counts, or references to predefined splits needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper mentions experiments were run on various setups but does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts; it describes the problem settings (e.g., measures "on graphs" or "in 2D") rather than the machines used. |
| Software Dependencies | No | The paper does not specify version numbers for any software components or libraries used in the experiments. |
| Experiment Setup | Yes | For LOT, we set the lower bound on g to 10⁻⁵. |
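The Sinkhorn pseudocode quoted in the table alternates two scaling updates until the L1 marginal violations fall below a tolerance δ. A minimal NumPy sketch of that loop, assuming dense inputs and a strictly positive kernel K (the function name and array conventions are illustrative, not from the paper):

```python
import numpy as np

def sinkhorn(K, a, b, delta, u=None):
    """Algorithm 1 as quoted: alternate v <- b/(K^T u), u <- a/(K v)
    until the summed L1 marginal errors drop below delta."""
    u = np.ones_like(a) if u is None else u
    while True:
        v = b / (K.T @ u)          # match the column marginal b
        u = a / (K @ v)            # match the row marginal a
        # stopping criterion: ||u * (K v) - a||_1 + ||v * (K^T u) - b||_1 < delta
        err = (np.abs(u * (K @ v) - a).sum()
               + np.abs(v * (K.T @ u) - b).sum())
        if err < delta:
            return u, v

# The scalings (u, v) define the transport plan P = diag(u) K diag(v),
# whose row and column sums approximate a and b up to delta.
```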