Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
On the "steerability" of generative adversarial networks
Authors: Ali Jahanian*, Lucy Chai*, Phillip Isola
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Thus, we conduct experiments to quantify the limits of GAN transformations and introduce techniques to mitigate the problem. |
| Researcher Affiliation | Academia | Ali Jahanian*, Lucy Chai*, & Phillip Isola, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. EMAIL |
| Pseudocode | No | No clearly labeled 'Pseudocode' or 'Algorithm' block was found. The methods are described using mathematical equations and textual descriptions. |
| Open Source Code | Yes | Code is released on our project page: https://ali-design.github.io/gan_steerability/. |
| Open Datasets | Yes | We demonstrate our approach using BigGAN (Brock et al., 2018), a class-conditional GAN trained on 1000 ImageNet categories. |
| Dataset Splits | No | No explicit training, validation, or test split percentages or sample counts were found for the datasets used in their experiments. The paper mentions using ImageNet and MNIST but does not specify how the data was partitioned for training, validation, and testing. |
| Hardware Specification | No | No specific hardware details (such as GPU or CPU models, memory specifications, or cloud computing instance types) used for running experiments were mentioned in the paper. |
| Software Dependencies | No | The paper mentions 'tensorflow' but does not provide a specific version number for it or any other key software dependencies. |
| Experiment Setup | Yes | We learn the walk vector using mini-batch stochastic gradient descent with the Adam optimizer (Kingma & Ba, 2014) in tensorflow, trained on 20000 unique samples from the latent space z. We share the vector w across all ImageNet categories for the BigGAN model. ... learning rate 0.001 for zoom and color, 0.0001 for the remaining edit operations (due to scaling of α). |
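The setup described in the table — learning a walk vector w so that G(z + αw) matches an edited version of G(z) — can be sketched as follows. This is a minimal toy sketch, not the paper's implementation: the generator here is a stand-in linear map (the paper uses BigGAN), the edit is a fixed target shift t, plain SGD replaces Adam, and the learning rate and well-conditioned generator are choices made so the toy converges.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Stand-in linear "generator" G(z) = A @ z (assumption: the paper uses BigGAN,
# not a linear map). A is kept well-conditioned so the toy problem converges.
A = np.eye(dim) + 0.1 * rng.normal(size=(dim, dim))

# Target edit direction t in output space: we want G(z + a*w) ~ G(z) + a*t.
t = rng.normal(size=dim)

w = np.zeros(dim)   # the walk vector to learn
lr = 0.01           # toy learning rate (the paper reports 0.001 for zoom/color)

for step in range(20000):  # the paper trains on 20000 latent samples
    z = rng.normal(size=dim)
    alpha = rng.uniform(-1.0, 1.0)  # random step size per sample
    # Squared loss || G(z + alpha*w) - (G(z) + alpha*t) ||^2; for the linear
    # stand-in this equals alpha^2 * ||A w - t||^2, with exact gradient:
    grad = 2.0 * alpha**2 * A.T @ (A @ w - t)
    w -= lr * grad

residual = np.linalg.norm(A @ w - t)  # small residual => walk reproduces edit
```

Note that for this linear stand-in the loss does not actually depend on z; the sampling is kept only to mirror the paper's training procedure, where the nonlinear generator makes each latent sample informative.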