Asymptotic Guarantees for Learning Generative Models with the Sliced-Wasserstein Distance
Authors: Kimia Nadjahi, Alain Durmus, Umut Şimşekli, Roland Badeau
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate the validity of our theory on both synthetic data and neural networks. ... We support our theory with experiments that are conducted on both synthetic and real data. |
| Researcher Affiliation | Academia | 1: LTCI, Télécom Paris, Institut Polytechnique de Paris, France 2: CMLA, ENS Cachan, CNRS, Université Paris-Saclay, France 3: Department of Statistics, University of Oxford, UK |
| Pseudocode | No | The paper does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | We provide the code to reproduce the experiments. ... See https://github.com/kimiandj/min_swe. |
| Open Datasets | Yes | We use the MNIST dataset, made of 60 000 training images and 10 000 test images of size 28 × 28. |
| Dataset Splits | No | The paper mentions 60,000 training images and 10,000 test images for the MNIST dataset but does not specify a validation split or percentages for data partitioning. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU/CPU models or memory specifications. |
| Software Dependencies | No | The paper mentions using the ADAM optimizer [32] but does not specify version numbers for any software components or libraries, as required for reproducibility. |
| Experiment Setup | Yes | We trained for 20 000 iterations with the ADAM optimizer [32]. Our training objective is MESWE of order 2 approximated with 20 random projections and 20 different generated datasets. We design a neural network with the fully-connected configuration given in [16, Appendix D]. (A hedged code sketch of this objective follows the table.) |
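The "Experiment Setup" row quotes a Sliced-Wasserstein objective approximated with a fixed number of random projections. Below is a minimal sketch of that Monte Carlo approximation, assuming NumPy; the function and argument names (`sliced_wasserstein`, `n_projections`, `order`) are hypothetical and this is not the authors' implementation, which is available at https://github.com/kimiandj/min_swe.

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=20, order=2, seed=None):
    """Monte Carlo estimate of the order-p Sliced-Wasserstein distance
    between two equal-size samples x, y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Draw random projection directions, uniform on the unit sphere S^{d-1}.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both samples onto each direction: shape (n, n_projections).
    x_proj, y_proj = x @ theta.T, y @ theta.T
    # In 1-D, W_p between equal-size empirical measures reduces to
    # matching sorted samples (order statistics).
    x_sorted = np.sort(x_proj, axis=0)
    y_sorted = np.sort(y_proj, axis=0)
    # Average W_p^p over projections, then take the p-th root.
    return float(np.mean(np.abs(x_sorted - y_sorted) ** order) ** (1.0 / order))

# Hypothetical usage on two synthetic Gaussian samples.
x = np.random.default_rng(0).normal(size=(500, 2))
y = np.random.default_rng(1).normal(loc=1.0, size=(500, 2))
print(sliced_wasserstein(x, y, n_projections=20, order=2))
```

Under the same assumptions, a MESWE-style training step as described in the setup row would average this order-2 distance between the observed data and each of the 20 independently generated datasets, then minimize that average over the generator's parameters with ADAM.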