Semantically Decomposing the Latent Spaces of Generative Adversarial Networks
Authors: Chris Donahue, Zachary C. Lipton, Akshay Balsubramani, Julian McAuley
ICLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments with human judges and an off-the-shelf face verification system demonstrate our algorithm's ability to generate convincing, identity-matched photographs. |
| Researcher Affiliation | Collaboration | Chris Donahue, Department of Music, University of California, San Diego (cdonahue@ucsd.edu); Zachary C. Lipton, Carnegie Mellon University / Amazon AI (zlipton@cmu.edu); Akshay Balsubramani, Department of Genetics, Stanford University (abalsubr@stanford.edu); Julian McAuley, Department of Computer Science, University of California, San Diego (jmcauley@eng.ucsd.edu) |
| Pseudocode | Yes | See Algorithm 1 for SD-GAN training pseudocode. (An illustrative sketch of one training step appears below the table.) |
| Open Source Code | Yes | Source code: https://github.com/chrisdonahue/sdgan |
| Open Datasets | Yes | We experimentally validate SD-GANs using two datasets: 1) the MS-Celeb-1M dataset of celebrity face images (Guo et al., 2016) and 2) a dataset of shoe images collected from Amazon (McAuley et al., 2015). |
| Dataset Splits | Yes | We split the celebrities into subsets of 10,000 (training), 1,250 (validation) and 1,250 (test). ... We use the same 80%, 10%, 10% split and again hash the images to ensure that the splits are disjoint. (A hash-based splitting sketch appears below the table.) |
| Hardware Specification | No | This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575 (Towns et al., 2014). GPUs used in this research were donated by the NVIDIA Corporation. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and refers to prior work for architectures, but does not specify software dependencies with version numbers (e.g., Python, TensorFlow/PyTorch versions). |
| Experiment Setup | Yes | To optimize SD-DCGAN, we use the Adam optimizer (Kingma & Ba, 2015) with hyperparameters α = 2e-4, β1 = 0.5, β2 = 0.999 as recommended by Radford et al. (2016). (An optimizer-configuration sketch appears below the table.) |
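
The "Pseudocode" row points to Algorithm 1 (SD-GAN training). As a rough illustration of the pairwise scheme that algorithm describes, here is a minimal PyTorch sketch of one training step. `G`, `D`, the latent sizes `d_i`/`d_o`, and the batch layout are all assumptions for illustration, not the authors' code (their released implementation is at the GitHub link above).

```python
import torch
import torch.nn.functional as F

# Illustrative sizes for the identity (z_i) and observation (z_o) latent
# subspaces; decomposing the latent vector this way is the core SD-GAN idea.
d_i, d_o, batch = 50, 50, 16

def sd_gan_step(G, D, opt_g, opt_d, real_pair):
    """One SD-GAN training step (sketch of Algorithm 1, not the authors' code).

    G: hypothetical generator mapping (z_i, z_o) -> image.
    D: hypothetical pairwise discriminator scoring (batch, 2, C, H, W) pairs.
    real_pair: two real photographs of the same identity per batch element.
    """
    # Fake pair: a shared identity code with two distinct observation codes.
    z_i = torch.randn(batch, d_i)
    z_o1, z_o2 = torch.randn(batch, d_o), torch.randn(batch, d_o)
    fake_pair = torch.stack([G(z_i, z_o1), G(z_i, z_o2)], dim=1)

    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator: identity-matched real pairs vs. generated pairs.
    opt_d.zero_grad()
    d_loss = (F.binary_cross_entropy_with_logits(D(real_pair), ones)
              + F.binary_cross_entropy_with_logits(D(fake_pair.detach()), zeros))
    d_loss.backward()
    opt_d.step()

    # Generator: make the fake pair pass as an identity-matched real pair.
    opt_g.zero_grad()
    g_loss = F.binary_cross_entropy_with_logits(D(fake_pair), ones)
    g_loss.backward()
    opt_g.step()
```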
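
For the "Dataset Splits" row, the paper says images were hashed so that the 80%/10%/10% splits stay disjoint. Below is a minimal sketch of one way such hash-based splitting could work; the MD5 choice and the ten-bucket scheme are assumptions, not details from the paper.

```python
import hashlib

def split_of(image_bytes: bytes) -> str:
    """Deterministically assign an image to a split by content hash.

    Approximates an 80%/10%/10% disjoint split; the hash function and
    bucketing are illustrative, not taken from the paper.
    """
    bucket = int(hashlib.md5(image_bytes).hexdigest(), 16) % 10
    if bucket < 8:
        return "train"
    return "valid" if bucket == 8 else "test"
```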
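
The "Experiment Setup" row reports Adam with α = 2e-4, β1 = 0.5, β2 = 0.999. Since the paper pins no framework versions (see the Software Dependencies row), here is only an illustrative mapping of those reported hyperparameters into PyTorch, with a placeholder model:

```python
import torch
import torch.nn as nn

model = nn.Linear(100, 1)  # placeholder for the SD-DCGAN generator or discriminator
opt = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.5, 0.999))
```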