Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Semantically Decomposing the Latent Spaces of Generative Adversarial Networks
Authors: Chris Donahue, Zachary C. Lipton, Akshay Balsubramani, Julian McAuley
ICLR 2018 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments with human judges and an off-the-shelf face verification system demonstrate our algorithm's ability to generate convincing, identity-matched photographs. |
| Researcher Affiliation | Collaboration | Chris Donahue, Department of Music, University of California, San Diego; Zachary C. Lipton, Carnegie Mellon University / Amazon AI; Akshay Balsubramani, Department of Genetics, Stanford University; Julian McAuley, Department of Computer Science, University of California, San Diego |
| Pseudocode | Yes | See Algorithm 1 for SD-GAN training pseudocode. |
| Open Source Code | Yes | Source code: https://github.com/chrisdonahue/sdgan |
| Open Datasets | Yes | We experimentally validate SD-GANs using two datasets: 1) the MS-Celeb-1M dataset of celebrity face images (Guo et al., 2016) and 2) a dataset of shoe images collected from Amazon (McAuley et al., 2015). |
| Dataset Splits | Yes | We split the celebrities into subsets of 10,000 (training), 1,250 (validation) and 1,250 (test). ... We use the same 80%, 10%, 10% split and again hash the images to ensure that the splits are disjoint. |
| Hardware Specification | No | This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI1053575 (Towns et al., 2014). GPUs used in this research were donated by the NVIDIA Corporation. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and refers to prior work for architectures, but does not specify software dependencies with version numbers (e.g., Python, TensorFlow/PyTorch versions). |
| Experiment Setup | Yes | To optimize SD-DCGAN, we use the Adam optimizer (Kingma & Ba, 2015) with hyperparameters α = 2e-4, β1 = 0.5, β2 = 0.999 as recommended by Radford et al. (2016). |
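The paper keeps its train/validation/test splits disjoint by hashing images. A minimal sketch of that idea, assuming a deterministic hash-bucket assignment for an 80/10/10 split (the function name, identifier format, and use of MD5 here are illustrative, not from the paper):

```python
import hashlib

def assign_split(item_id: str) -> str:
    """Deterministically assign an identifier to train/val/test (80/10/10).

    Hashing the identifier (rather than sampling randomly) guarantees the
    same item always lands in the same split, so splits stay disjoint
    across runs. The 80/90 thresholds are the split boundaries.
    """
    bucket = int(hashlib.md5(item_id.encode()).hexdigest(), 16) % 100
    if bucket < 80:
        return "train"
    if bucket < 90:
        return "val"
    return "test"

# Example: splitting 10,000 hypothetical identifiers
splits = [assign_split(f"celeb_{i}") for i in range(10_000)]
```

Because the hash is uniform over buckets, the realized proportions approach 80/10/10 as the dataset grows, without any coordination between splitting runs.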
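The experiment-setup row cites Adam with α = 2e-4, β1 = 0.5, β2 = 0.999. As a self-contained illustration of what those hyperparameters control, here is one Adam update step for a scalar parameter in plain Python (the function and variable names are illustrative; the paper does not specify its implementation, and a real training run would use a framework optimizer):

```python
def adam_step(theta, grad, m, v, t, alpha=2e-4, beta1=0.5, beta2=0.999, eps=1e-8):
    """Apply one Adam update; returns the updated (theta, m, v).

    m and v are running estimates of the gradient's first and second
    moments; t is the 1-indexed step count used for bias correction.
    """
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                # bias-corrected second moment
    theta = theta - alpha * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

# Example: 100 steps minimizing f(theta) = theta^2 (gradient 2 * theta)
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 101):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
```

The low β1 = 0.5 (versus Adam's default 0.9) shortens the momentum window, a common choice for GAN training following Radford et al. (2016).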