Sliced Wasserstein Auto-Encoders

Authors: Soheil Kolouri, Phillip E. Pope, Charles E. Martin, Gustavo K. Rohde

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We provide extensive error analysis for our algorithm, and show its merits on three benchmark datasets. In our experiments we used three image datasets, namely the MNIST dataset by LeCun (1998), the CelebFaces Attributes Dataset (CelebA) by Liu et al. (2015), and the LSUN Bedroom Dataset by Yu et al. (2015)."
Researcher Affiliation | Collaboration | Soheil Kolouri, Phillip E. Pope, and Charles E. Martin: Information and Systems Sciences Laboratory, HRL Laboratories, LLC, Malibu, CA, USA ({skolouri,pepope,cemartin}@hrl.com). Gustavo K. Rohde: Department of Electrical Engineering, University of Virginia, Charlottesville, VA, USA (gustavo@virginia.edu).
Pseudocode | Yes | Algorithm 1: Sliced-Wasserstein Auto-Encoder (SWAE). A hedged code sketch of the core computation appears after this table.
Open Source Code | No | The paper provides no link to open-source code for the described methodology and does not state that code is publicly available.
Open Datasets | Yes | "In our experiments we used three image datasets, namely the MNIST dataset by LeCun (1998), the CelebFaces Attributes Dataset (CelebA) by Liu et al. (2015), and the LSUN Bedroom Dataset by Yu et al. (2015)."
Dataset Splits | No | The paper mentions a "training set" and "testing samples" but specifies neither the percentages or counts for training, validation, and test splits nor the splitting methodology.
Hardware Specification | Yes | "For the MNIST experiment and on a single NVIDIA Tesla P100 GPU, each batch iteration (batch size = 500) of WAE-GAN took 0.2571 ± 0.0435 s, while SWAE (with L = 50 projections) took 0.2437 ± 0.0391 s."
Software Dependencies | No | The paper mentions using a DCGAN (Radford et al., 2015) architecture but provides no version numbers for any software libraries, frameworks, or dependencies used in the implementation.
Experiment Setup | Yes | Algorithm 1 requires a regularization coefficient $\lambda$ and a number of random projections $L$. In each iteration, let $\{x_m \sim p_X\}_{m=1}^{M}$ and $\{\tilde{z}_m \sim q_Z\}_{m=1}^{M}$ be i.i.d. random samples from the input data and the predefined distribution $q_Z$, respectively, and let $\{\theta_l\}_{l=1}^{L}$ be sampled from a uniform distribution on the unit sphere $\mathbb{S}^{d-1}$. For the MNIST experiment, each batch iteration on a single NVIDIA Tesla P100 GPU used batch size 500. A $K = 64$-dimensional latent space was used for both the CelebA and LSUN Bedroom datasets.
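
To make the quoted setup concrete, below is a minimal NumPy sketch of the sliced-Wasserstein regularizer at the core of Algorithm 1 and of a single SWAE batch objective. It is illustrative only: the function names, the default λ = 10, and the placeholder encode/decode/sample_prior callables are assumptions, not the authors' implementation (the paper releases no code).

```python
import numpy as np

def sliced_wasserstein_distance(z, z_prior, num_projections=50, p=2, rng=None):
    """Monte-Carlo sliced p-Wasserstein distance between two sample sets.

    z, z_prior : (M, K) arrays -- encoded codes and draws from q_Z.
    """
    rng = np.random.default_rng() if rng is None else rng
    k = z.shape[1]
    # Directions uniform on the unit sphere S^{K-1}: normalize Gaussian draws.
    theta = rng.standard_normal((num_projections, k))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project each sample set onto every direction: shape (M, L).
    proj_z = z @ theta.T
    proj_prior = z_prior @ theta.T
    # In 1-D, the p-Wasserstein distance between equal-size empirical
    # measures reduces to matching sorted samples.
    proj_z.sort(axis=0)
    proj_prior.sort(axis=0)
    return np.mean(np.abs(proj_z - proj_prior) ** p)

def swae_batch_loss(x, encode, decode, sample_prior, lam=10.0,
                    num_projections=50, rng=None):
    """One SWAE batch objective: reconstruction + lambda * sliced-Wasserstein.

    encode, decode, and sample_prior are hypothetical callables standing in
    for the paper's encoder, decoder, and prior sampler; lam=10.0 is an
    assumed value, not one reported in the paper.
    """
    z = encode(x)                     # {x_m ~ p_X} -> latent codes
    z_prior = sample_prior(z.shape)   # {z_m ~ q_Z}
    recon_loss = np.mean((x - decode(z)) ** 2)
    sw = sliced_wasserstein_distance(z, z_prior, num_projections, rng=rng)
    return recon_loss + lam * sw
```

The design point visible here is why SWAE's per-batch cost is competitive with the WAE-GAN timings quoted above: after projecting onto each of the $L$ random directions, the 1-D Wasserstein distance needs only a sort, so each slice costs $O(M \log M)$ per batch.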