Is Generator Conditioning Causally Related to GAN Performance?

Authors: Augustus Odena, Jacob Buckman, Catherine Olsson, Tom Brown, Christopher Olah, Colin Raffel, Ian Goodfellow

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test the hypothesis that this relationship is causal by proposing a regularization technique (called Jacobian Clamping) that softly penalizes the condition number of the generator Jacobian. Jacobian Clamping improves the mean Inception Score and the mean FID for GANs trained on several datasets and greatly reduces inter-run variance of the aforementioned scores, addressing (at least partially) one of the main criticisms of GANs.
Researcher Affiliation | Industry | Augustus Odena, Jacob Buckman, Catherine Olsson, Tom B. Brown, Christopher Olah, Colin Raffel, Ian Goodfellow (Google Brain). Correspondence to: Augustus Odena <augustusodena@google.com>.
Pseudocode | Yes | Algorithm 1: Jacobian Clamping (a hedged code sketch of this algorithm appears after the table).
Open Source Code | No | The paper mentions using a baseline implementation from a GitHub repository (https://github.com/igul222/improved_wgan_training), but it does not state that the authors' own Jacobian Clamping code is released, nor does it link to such a release.
Open Datasets | Yes | We test GANs trained on three datasets: MNIST, CIFAR-10, and STL-10 (LeCun et al., 1998; Krizhevsky, 2009; Coates et al., 2011).
Dataset Splits | No | The paper uses well-known datasets (MNIST, CIFAR-10, STL-10) that have standard splits, but it does not explicitly state the training, validation, or test split percentages or sample counts used in the experiments.
Hardware Specification | No | The paper does not provide any details about the hardware used to run the experiments (e.g., GPU models, CPU types, or memory).
Software Dependencies | No | The acknowledgements mention TensorFlow (used for the Jacobian computation), but the paper does not specify a version for TensorFlow or for any other software dependency needed for reproducibility.
Experiment Setup | Yes | The hyperparameters we use are those from Radford et al. (2015), except that we modified the generator where appropriate so that the output would be of the right size. ... Specifically, we train the same models as from the previous section using Jacobian Clamping with a λmax of 20, a λmin of 1, and ϵ of 1 and hold everything else the same. (These values are used in the usage sketch after the table.)
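
As we read Algorithm 1, Jacobian Clamping perturbs each latent vector along a random direction of norm ε, measures how much the generator output changes relative to that perturbation, and quadratically penalizes this ratio Q whenever it falls outside the band [λmin, λmax]. The following is a minimal PyTorch sketch of that penalty based on the paper's description; the function name, tensor shapes, and batching details are our assumptions, not the authors' released code.

```python
import torch

def jacobian_clamping_penalty(generator, z, eps=1.0, lambda_min=1.0, lambda_max=20.0):
    """Sketch of a Jacobian Clamping-style penalty: quadratically penalize the
    finite-difference estimate Q whenever it leaves [lambda_min, lambda_max]."""
    # Perturb each latent vector along a random direction of norm eps.
    delta = torch.randn_like(z)
    delta = eps * delta / delta.norm(dim=1, keepdim=True)
    z_prime = z + delta

    # Compare generator outputs at z and z + delta.
    g_z = generator(z).flatten(start_dim=1)
    g_z_prime = generator(z_prime).flatten(start_dim=1)

    # Q estimates ||J v|| / ||v|| along the sampled direction v = delta.
    q = (g_z - g_z_prime).norm(dim=1) / (z - z_prime).norm(dim=1)

    # Zero inside the clamping band, quadratic outside it.
    l_max = (torch.clamp(q, min=lambda_max) - lambda_max) ** 2
    l_min = (torch.clamp(q, max=lambda_min) - lambda_min) ** 2
    return (l_max + l_min).mean()
```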
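
The quoted experiment setup reports λmax = 20, λmin = 1, and ε = 1 on top of the hyperparameters of Radford et al. (2015). A hypothetical generator update applying the penalty sketch above with those values could look like the following; the tiny MLP models, batch size, and WGAN-style loss are illustrative placeholders, not the architecture or training loop the authors used.

```python
import torch
import torch.nn as nn

latent_dim, batch_size = 128, 64  # illustrative sizes, not from the paper
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784))
discriminator = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))
g_optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))

z = torch.randn(batch_size, latent_dim)
gan_loss = -discriminator(generator(z)).mean()  # placeholder generator loss
jc_penalty = jacobian_clamping_penalty(generator, z, eps=1.0,
                                       lambda_min=1.0, lambda_max=20.0)

# Add the clamping penalty to the usual generator objective and step.
g_optimizer.zero_grad()
(gan_loss + jc_penalty).backward()
g_optimizer.step()
```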