Generalized Sliced Wasserstein Distances

Authors: Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Roland Badeau, Gustavo Rohde

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we compare the numerical performance of the proposed distances on the generative modeling task of SW flows and report favorable results."
Researcher Affiliation | Collaboration | Soheil Kolouri (1), Kimia Nadjahi (2), Umut Şimşekli (2,3), Roland Badeau (2), Gustavo K. Rohde (4). 1: HRL Laboratories, LLC., Malibu, CA, USA 90265; 2: LTCI, Télécom Paris, Institut Polytechnique de Paris, France; 3: Department of Statistics, University of Oxford, UK; 4: University of Virginia, Charlottesville, VA, USA 22904
Pseudocode | No | The paper states that "the whole procedure is summarized as pseudocode in the supplementary document", indicating the pseudocode appears only in the supplement, not in the main body of the paper.
Open Source Code | Yes | "We provide the source code to reproduce the experiments of this paper." (See https://github.com/kimiandj/gsw.)
Open Datasets | Yes | "To move to more realistic datasets, we considered GSW flows for the hand-written digit recognition dataset, MNIST..." and "Finally, we applied our methodology on a larger dataset, namely CelebA [49]."
Dataset Splits | No | The paper mentions using "the training set of MNIST" and a "pre-trained auto-encoder" for CelebA, but it does not report split percentages, sample counts, or a methodology for creating train/validation/test splits.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments (e.g., GPU/CPU models or memory), offering only general remarks about "computer science applications" and "high-dimensional settings".
Software Dependencies | No | The paper does not provide version numbers for any software dependencies or libraries used in the implementation or experiments.
Experiment Setup | No | The paper states that "the exact same optimization scheme" was used for all methods, with "L = 1 projection" and a "3-layer neural network" as the defining function, but it does not report hyperparameters such as learning rates, batch sizes, or optimizer settings; a minimal illustrative sketch follows this table.
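
Since the paper leaves these optimization details unspecified, the sketch below illustrates the two ingredients the table refers to: a generalized sliced Wasserstein (GSW) distance computed through a 3-layer neural defining function with L = 1 projection, and an SW-flow-style particle update. This is a minimal sketch assuming PyTorch, not the authors' implementation (see the repository linked above for that): the names DefiningFunction and gsw_distance, the hidden width, learning rate, and step count are all hypothetical choices, and the defining function is left fixed at its random initialization for simplicity, whereas the paper's max-GSW variant optimizes it adversarially.

```python
# Illustrative GSW distance + SW-flow sketch (hypothetical names, PyTorch assumed).
import torch
import torch.nn as nn

class DefiningFunction(nn.Module):
    """A 3-layer MLP g_theta: R^d -> R^L acting as the slicing function."""
    def __init__(self, dim, n_projections=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, n_projections),
        )

    def forward(self, x):
        return self.net(x)

def gsw_distance(x, y, g, p=2):
    """Monte Carlo GSW_p between equal-sized samples x, y of shape (n, d).

    Projecting to 1-D and sorting yields the optimal 1-D coupling, so the
    distance reduces to a mean over matched order statistics.
    """
    px, _ = torch.sort(g(x), dim=0)   # (n, L) sorted projections of x
    py, _ = torch.sort(g(y), dim=0)   # (n, L) sorted projections of y
    return (px - py).abs().pow(p).mean().pow(1.0 / p)

if __name__ == "__main__":
    torch.manual_seed(0)
    target = torch.randn(256, 2) + 2.0             # samples from the target
    particles = torch.randn(256, 2, requires_grad=True)
    g = DefiningFunction(dim=2, n_projections=1)   # L = 1, as in the paper
    opt = torch.optim.SGD([particles], lr=0.5)

    # GSW flow: gradient descent on the particle positions under the GSW loss.
    for step in range(200):
        opt.zero_grad()
        loss = gsw_distance(particles, target, g)
        loss.backward()
        opt.step()
    print(f"final GSW estimate: {loss.item():.4f}")
```

The closed-form 1-D coupling obtained by sorting is what makes sliced distances cheap to evaluate; the flow then simply moves the particles by gradient descent on that loss, which is the generative-modeling setup the "Research Type" row quotes.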