Banach Wasserstein GAN

Authors: Jonas Adler, Sebastian Lunz

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare BWGAN with different norms on the CIFAR-10 and CelebA datasets. To demonstrate computational feasibility and to show how the choice of norm can impact the trained generator, we implemented Banach Wasserstein GAN with various Sobolev and L^p norms, applied to CIFAR-10 and CelebA (64 × 64 pixels). (A hedged sketch of an L^p dual-norm gradient penalty is given after the table.)
Researcher Affiliation | Collaboration | Jonas Adler, Department of Mathematics, KTH Royal Institute of Technology, and Research and Physics, Elekta (jonasadl@kth.se); Sebastian Lunz, Department of Applied Mathematics and Theoretical Physics, University of Cambridge (lunz@math.cam.ac.uk).
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Our implementation is available online: https://github.com/adler-j/bwgan
Open Datasets | Yes | We compare BWGAN with different norms on the CIFAR-10 and CelebA datasets.
Dataset Splits | No | The paper mentions using the CIFAR-10 and CelebA datasets and discusses training parameters such as batch size and optimizer settings, but it does not explicitly state how the datasets were split into training, validation, or test sets (e.g., percentages or sample counts for each split).
Hardware Specification | No | No specific hardware details (such as GPU models, CPU types, or memory specifications) used for running the experiments are mentioned in the paper; it only describes the software implementation.
Software Dependencies | No | The paper states that "The implementation was done in TensorFlow" and "For training we used the Adam optimizer [10]", but it does not provide version numbers for TensorFlow or any other software libraries and dependencies used in the experiments.
Experiment Setup | Yes | For training we used the Adam optimizer [10] with learning rate decaying linearly from 2 · 10⁻⁴ to 0 over 100 000 iterations, with β1 = 0, β2 = 0.9. We used 5 discriminator updates per generator update. The batch size used was 64. (A hedged configuration sketch follows the table.)
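
The research-type row above mentions training BWGAN with various Sobolev and L^p norms. As a rough illustration of the core ingredient, the following is a minimal sketch, in TensorFlow, of a WGAN-GP-style gradient penalty measured in the dual L^q norm (1/p + 1/q = 1), which is how the BWGAN construction specializes to L^p spaces. The function name, the penalty weight `lamb`, and the interpolation scheme are illustrative assumptions, not details taken from the paper.

```python
import tensorflow as tf

def lp_dual_gradient_penalty(discriminator, real, fake, p=2.0, lamb=10.0):
    # Sketch only: penalize the discriminator gradient in the dual L^q norm,
    # q = p / (p - 1), assuming 1 < p < infinity. `lamb` is a hypothetical
    # penalty weight; the paper also uses scaling factors not shown here.
    q = p / (p - 1.0)
    batch = tf.shape(real)[0]
    # Random interpolation between real and generated samples, as in WGAN-GP.
    eps = tf.random.uniform([batch, 1, 1, 1], 0.0, 1.0)
    interp = eps * real + (1.0 - eps) * fake
    with tf.GradientTape() as tape:
        tape.watch(interp)
        critic_out = discriminator(interp)
    grads = tape.gradient(critic_out, interp)   # shape [batch, H, W, C]
    flat = tf.reshape(grads, [batch, -1])
    # Per-sample dual (L^q) norm of the gradient.
    dual_norm = tf.reduce_sum(tf.abs(flat) ** q, axis=1) ** (1.0 / q)
    return lamb * tf.reduce_mean(tf.square(dual_norm - 1.0))
```

For the Sobolev norms also reported in the paper, the norm computation would differ (e.g. via a Fourier-domain weighting), but the overall structure of the penalty term is the same.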
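
The experiment-setup row reports Adam with β1 = 0, β2 = 0.9, a learning rate decaying linearly from 2 · 10⁻⁴ to 0 over 100 000 iterations, 5 discriminator updates per generator update, and batch size 64. Below is a minimal sketch of those settings in TensorFlow/Keras; the model and loss definitions are placeholders, and whether the 100 000 iterations count generator or discriminator steps is an assumption.

```python
import tensorflow as tf

TOTAL_ITERS = 100_000   # reported number of training iterations
BATCH_SIZE = 64         # reported batch size
N_CRITIC = 5            # reported discriminator updates per generator update

# Linear learning-rate decay from 2e-4 to 0 (PolynomialDecay with power=1).
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-4,
    decay_steps=TOTAL_ITERS,
    end_learning_rate=0.0,
    power=1.0,
)
gen_opt = tf.keras.optimizers.Adam(lr_schedule, beta_1=0.0, beta_2=0.9)
disc_opt = tf.keras.optimizers.Adam(lr_schedule, beta_1=0.0, beta_2=0.9)

for step in range(TOTAL_ITERS):
    for _ in range(N_CRITIC):
        pass  # one discriminator update on a batch of 64 real / generated images
    pass      # one generator update
```

Note that each Keras optimizer decays its schedule over its own step count; if the reported 100 000 iterations refer to generator steps, the discriminator's decay_steps would need to be scaled by N_CRITIC.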