Statistical and Topological Properties of Sliced Probability Divergences
Authors: Kimia Nadjahi, Alain Durmus, Lénaïc Chizat, Soheil Kolouri, Shahin Shahrampour, Umut Simsekli
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we support our theory with numerical experiments on synthetic and real data. We present the numerical experiments that we conducted to illustrate our theoretical findings, and we provide the code to reproduce them. |
| Researcher Affiliation | Collaboration | 1: LTCI, Télécom Paris, Institut Polytechnique de Paris, France; 2: Centre Borelli, ENS Paris-Saclay, CNRS, Université Paris-Saclay, France; 3: Laboratoire de Mathématiques d'Orsay, CNRS, Université Paris-Saclay, France; 4: HRL Laboratories, LLC, Malibu, CA, USA; 5: Texas A&M University, College Station, TX, USA; 6: Department of Statistics, University of Oxford, UK |
| Pseudocode | No | The paper contains theoretical derivations and descriptions of methods, but no explicit pseudocode or algorithm blocks are provided. |
| Open Source Code | Yes | We present the numerical experiments that we conducted to illustrate our theoretical findings, and we provide the code to reproduce them (footnote 2: see https://github.com/kimiandj/sliced_div). |
| Open Datasets | Yes | We use the MNIST [39] and CIFAR-10 [40, Chapter 3] datasets |
| Dataset Splits | No | The paper describes generating synthetic data and selecting random subsets from real datasets for experiments (e.g., "two sets of 500 samples i.i.d. from the d-dimensional Gaussian distribution N(0, Id)", "randomly select two subsets of n samples from the same dataset"), but it does not specify explicit training, validation, and test dataset splits in percentages or absolute counts for reproducibility. |
| Hardware Specification | No | The paper describes conducting numerical experiments but does not provide any specific hardware details such as CPU/GPU models, memory, or cloud computing instance specifications used for these experiments. |
| Software Dependencies | No | The paper refers to certain algorithms (e.g., Sinkhorn's algorithm) and tools (e.g., Gaussian kernel for MMD), but it does not specify any software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, NumPy x.x). |
| Experiment Setup | Yes | We consider two sets of 500 samples i.i.d. from the d-dimensional Gaussian distribution N(0, Id), and we approximate SW2 between the empirical distributions with a Monte Carlo scheme that uses a high number of projections L = 10 000. (A sketch of this Monte Carlo scheme appears below the table.) |
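
The experiment-setup row lends itself to a short illustration. Below is a minimal sketch of the Monte Carlo approximation of SW2 between two equal-sized empirical distributions, following the setup quoted above (500 samples each from N(0, Id), L = 10 000 projections). This is not the authors' implementation (their code is at https://github.com/kimiandj/sliced_div); the dimension `d = 10` and the random seeds are illustrative assumptions.

```python
import numpy as np


def sliced_wasserstein_2(x, y, n_projections=10_000, seed=0):
    """Monte Carlo estimate of the Sliced-Wasserstein distance of order 2
    between two empirical distributions with the same number of samples.

    Illustrative sketch only; see the authors' repository for the
    reference implementation.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Draw L random directions uniformly on the unit sphere S^{d-1}.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto every direction: shape (n, L).
    x_proj = x @ theta.T
    y_proj = y @ theta.T
    # In 1D, W2^2 between equal-sized empirical measures is the mean
    # squared difference of the sorted projected samples.
    x_sorted = np.sort(x_proj, axis=0)
    y_sorted = np.sort(y_proj, axis=0)
    w2_sq = np.mean((x_sorted - y_sorted) ** 2, axis=0)  # one value per projection
    # Average over projections, then take the square root.
    return np.sqrt(np.mean(w2_sq))


# Setup from the table row above: two sets of 500 i.i.d. samples from N(0, I_d).
# d = 10 is an assumed value; the paper varies the dimension.
d, n, L = 10, 500, 10_000
rng = np.random.default_rng(42)
x = rng.normal(size=(n, d))
y = rng.normal(size=(n, d))
print(sliced_wasserstein_2(x, y, n_projections=L))
```

With both sample sets drawn from the same Gaussian, the estimate should be small and shrink as n grows, which is the sample-complexity behaviour the paper studies.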