BourGAN: Generative Networks with Metric Embeddings
Authors: Chang Xiao, Peilin Zhong, Changxi Zheng
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Our experiments on real and synthetic data confirm that the generator is able to produce samples spreading over most of the modes while avoiding unwanted samples, outperforming several recent GAN variants on a number of metrics and offering new features." |
| Researcher Affiliation | Academia | Chang Xiao Peilin Zhong Changxi Zheng Columbia University {chang, peilin, cxz}@cs.columbia.edu |
| Pseudocode | No | The paper describes algorithmic steps in text but does not include structured pseudocode or a clearly labeled algorithm block. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. It mentions appendices for experiment details but not code. |
| Open Datasets | Yes | "For instance, when trained on ten hand-written digits (using MNIST dataset) [30], each digit represents a mode of data distribution..." |
| Dataset Splits | No | The paper states "For MNIST, we resize all images to 32 x 32 and train the GAN using all 60000 images," but it does not provide specific train/validation/test splits or cross-validation details for reproducibility. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments are provided in the paper. |
| Software Dependencies | No | The paper mentions implementing in PyTorch [45] but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | "We use the Adam optimizer with a learning rate of 0.0001, β1 = 0.5, β2 = 0.999. The parameter β is chosen to be 0.1 (Equation 2). We choose σ empirically (σ = 0.1 for all our examples)." |
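The reported optimizer settings can be illustrated with a single Adam update step in plain Python. This is a minimal sketch using the paper's hyperparameters (lr = 0.0001, β1 = 0.5, β2 = 0.999), not the authors' code; their PyTorch implementation would presumably pass these values to `torch.optim.Adam`.

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-4, beta1=0.5, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter, using the paper's hyperparameters.

    theta: current parameter value; grad: its gradient;
    m, v: running first/second moment estimates; t: step count (1-based).
    """
    m = beta1 * m + (1 - beta1) * grad        # update biased first moment
    v = beta2 * v + (1 - beta2) * grad ** 2   # update biased second moment
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# One step from theta = 0 with gradient 1.0: the bias-corrected moments are
# both 1 on the first step, so theta moves by roughly -lr (about -1e-4).
theta, m, v = adam_step(0.0, 1.0, 0.0, 0.0, t=1)
```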