Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections
Authors: Kimia Nadjahi, Alain Durmus, Pierre E. Jacob, Roland Badeau, Umut Simsekli
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our theoretical findings on synthetic datasets, and illustrate the proposed approximation on a generative modeling problem. |
| Researcher Affiliation | Academia | Kimia Nadjahi1, Alain Durmus2, Pierre E. Jacob3, Roland Badeau1, Umut Şimşekli4. 1: LTCI, Télécom Paris, Institut Polytechnique de Paris, France 2: Université Paris-Saclay, ENS Paris-Saclay, CNRS, Centre Borelli, F-91190 Gif-sur-Yvette, France 3: Department of Information Systems, Decision Sciences and Statistics, ESSEC Business School, Cergy, France 4: INRIA, Département d'Informatique de l'École Normale Supérieure, PSL Research University, Paris, France |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our empirical results can be reproduced with our open source code. See https://github.com/kimiandj/fast_sw |
| Open Datasets | Yes | models on MNIST and CelebA |
| Dataset Splits | Yes | In each setting, we generate two sets of d-dimensional samples, denoted by {x^(j)}_{j=1}^n and {y^(j)}_{j=1}^n, with n = 10^4 |
| Hardware Specification | No | The paper mentions that models were trained 'On GPU' and 'on CPU' and refers to 'GPU-accelerated implementation', but does not provide specific hardware details such as exact GPU/CPU models or processor types. |
| Software Dependencies | No | The paper states 'the models were trained using PyTorch', but does not specify its version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | In each setting, we generate two sets of d-dimensional samples, denoted by {x^(j)}_{j=1}^n and {y^(j)}_{j=1}^n, with n = 10^4 |
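For context, the experiment setup above refers to estimating the sliced-Wasserstein distance between two d-dimensional sample sets. A minimal sketch of the standard Monte Carlo sliced-Wasserstein estimator (the baseline the paper accelerates, not the paper's proposed approximation) is shown below; the function name and parameters are illustrative, not taken from the authors' repository.

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, seed=0):
    """Monte Carlo estimate of the 2-sliced-Wasserstein distance
    between two equally sized, uniformly weighted sample sets."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Draw random directions uniformly on the unit sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sets onto each direction and sort: for 1-D
    # distributions, Wasserstein-2 matches sorted samples (quantiles).
    px = np.sort(x @ theta.T, axis=0)
    py = np.sort(y @ theta.T, axis=0)
    # Average squared quantile differences over samples and projections.
    return np.sqrt(np.mean((px - py) ** 2))

rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 5))
y = rng.normal(loc=1.0, size=(1000, 5))
print(sliced_wasserstein(x, y))
```

Sorting aligns the empirical quantiles of the two projected samples, which is why the two sets must have the same size here; a GPU implementation (as the paper mentions) replaces the NumPy calls with batched tensor sorts.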