Statistical, Robustness, and Computational Guarantees for Sliced Wasserstein Distances

Authors: Sloan Nietert, Ziv Goldfeld, Ritwik Sadhu, Kengo Kato

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our theory is validated by numerical experiments, which altogether provide a comprehensive quantitative account of the scalability question." (Section 6, Empirical Results)
Researcher Affiliation | Academia | Sloan Nietert (Cornell University, nietert@cs.cornell.edu); Ritwik Sadhu (Cornell University, rs2526@cornell.edu); Ziv Goldfeld (Cornell University, goldfeld@cornell.edu); Kengo Kato (Cornell University, kk976@cornell.edu)
Pseudocode | Yes | Algorithm 1: Projected subgradient method for W2^2
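The paper's Algorithm 1 itself is not reproduced in this summary. As a rough illustration only, a generic projected subgradient ascent for the empirical max-sliced W2^2 between two equal-size samples might look like the sketch below; the function name `max_sliced_w2_sq`, the step size, and the iteration count are assumptions, not taken from the paper.

```python
import numpy as np

def w2_sq_1d(x, y):
    """Squared 2-Wasserstein distance between two equal-size 1-D samples:
    sort both and average the squared differences of order statistics."""
    return np.mean((np.sort(x) - np.sort(y)) ** 2)

def max_sliced_w2_sq(X, Y, n_iters=200, step=0.1, seed=0):
    """Projected subgradient ascent over the unit sphere for the empirical
    max-sliced W2^2 between samples X and Y (both n x d arrays)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    theta = rng.normal(size=d)
    theta /= np.linalg.norm(theta)
    best = 0.0
    for _ in range(n_iters):
        # Project both samples onto the current direction.
        x_proj, y_proj = X @ theta, Y @ theta
        ix, iy = np.argsort(x_proj), np.argsort(y_proj)
        # Subgradient of the projected W2^2 w.r.t. theta, using the
        # optimal (sorted) coupling of the two 1-D samples.
        diff = x_proj[ix] - y_proj[iy]
        grad = (2.0 / len(diff)) * ((X[ix] - Y[iy]).T @ diff)
        theta = theta + step * grad       # subgradient ascent step
        theta /= np.linalg.norm(theta)    # project back onto the unit sphere
        best = max(best, w2_sq_1d(x_proj, y_proj))
    return best
```

For two standard Gaussian samples whose means differ by a shift along one coordinate, the ascent should steer `theta` toward that coordinate, so the returned value approaches the squared mean shift.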
Open Source Code | Yes | The code for all experiments and figures is publicly available at https://github.com/swnietert/SWD_guarantees.
Open Datasets | No | Our experiments use only synthetic data.
Dataset Splits | No | The paper describes how samples are generated for experiments (e.g., 'n = 500' or 'n = 10dε⁻² samples'), but it does not specify any training, validation, or test dataset splits.
Hardware Specification | Yes | All computations were performed on a single machine with an Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz, 64 GB of RAM, and an NVIDIA GeForce RTX 3090 GPU.
Software Dependencies | No | The paper states 'All code was written in Python 3.9 and relies on the NumPy, SciPy, Matplotlib, and scikit-learn libraries', but only Python has a version number specified; the library versions are not provided.
Experiment Setup | Yes | "Sample size is fixed at n = 500 and computation times are averaged over 10 trials." and "For d ∈ {10, 20, . . . , 200}, we take n = 10dε⁻² samples, with (1 − ε)n drawn i.i.d. from N(0, I_d) and εn from a product noise distribution used in [19], with ε = 0.1."
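The contaminated-sample setup quoted above can be sketched as follows. Since this summary does not specify the product noise distribution from the paper's reference [19], a purely hypothetical stand-in (i.i.d. uniform coordinates) is used in its place, and the ε⁻² reading of the sample size is an assumption.

```python
import numpy as np

def contaminated_sample(d, eps=0.1, rng=None):
    """Draw n = ceil(10 * d / eps**2) points in R^d, with a (1 - eps)
    fraction i.i.d. N(0, I_d) and an eps fraction from a product noise law.
    NOTE: the noise law below is NOT the one from the paper's reference
    [19]; it is a hypothetical stand-in for illustration only."""
    rng = rng or np.random.default_rng(0)
    n = int(np.ceil(10 * d / eps**2))
    n_noise = int(round(eps * n))
    clean = rng.normal(size=(n - n_noise, d))
    noise = rng.uniform(-5.0, 5.0, size=(n_noise, d))  # placeholder product law
    return np.vstack([clean, noise])
```

With d = 10 and ε = 0.1 this yields n = 10,000 points, 1,000 of them contaminated.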