On Predicting Generalization using GANs

Authors: Yi Zhang, Arushi Gupta, Nikunj Saunshi, Sanjeev Arora

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In particular, in Section 3.1 and 3.2 we evaluate on the PGDL and DEMOGEN benchmarks of predicting generalization and present strong results.
Researcher Affiliation | Collaboration | 1 Princeton University, Computer Science Department, {y.zhang, arushig, nsaunshi, arora}@cs.princeton.edu; 2 Microsoft Research
Pseudocode | Yes | Algorithm 1: Predicting test performance (a hedged sketch of this algorithm appears after the table)
Open Source Code | No | The paper mentions using 'pre-trained BigGAN + DiffAug models... from the StudioGAN library' with a link to its GitHub repository, but does not state that the authors are releasing their own implementation code for the specific methodology described in the paper.
Open Datasets | Yes | In particular, in Section 3.1 and 3.2 we evaluate on the PGDL and DEMOGEN benchmarks of predicting generalization and present strong results. This is verified for families of well-known GANs and datasets including primarily CIFAR-10/100, Tiny ImageNet.
Dataset Splits | No | The paper refers to a 'training set S_train' and 'test set S_test' and uses a 'synthetic dataset S_syn' for prediction, but does not explicitly provide details about a distinct validation set or specific train/validation/test dataset splits for reproduction.
Hardware Specification | No | The paper does not specify the exact hardware (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using models from the StudioGAN library and implies PyTorch (via its GitHub link), but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | We use SGD with momentum 0.9 and batch size 128 and data augmentation of horizontal flips for training all classifiers. (An illustrative training-setup sketch appears below.)
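
To make the Pseudocode row concrete, here is a minimal sketch of Algorithm 1 as the paper describes it: a classifier's test accuracy is predicted by its accuracy on a synthetic dataset S_syn sampled from a class-conditional GAN trained on the same training set. The `generator(z, y)` interface, latent dimension, and sample count are illustrative assumptions, not the paper's exact implementation (which uses pre-trained StudioGAN models).

```python
import torch

@torch.no_grad()
def predict_test_accuracy(classifier, generator, num_classes,
                          n_samples=10_000, batch_size=128,
                          z_dim=128, device="cpu"):
    """Sketch of Algorithm 1 (predicting test performance).

    Assumes `generator(z, y)` is a class-conditional GAN returning images
    and `classifier(x)` returns logits; both are torch.nn.Module stand-ins.
    """
    classifier.eval()
    generator.eval()
    correct = 0
    for start in range(0, n_samples, batch_size):
        b = min(batch_size, n_samples - start)
        z = torch.randn(b, z_dim, device=device)                # latent noise
        y = torch.randint(0, num_classes, (b,), device=device)  # synthetic labels
        x_syn = generator(z, y)                                 # samples forming S_syn
        preds = classifier(x_syn).argmax(dim=1)
        correct += (preds == y).sum().item()
    # Accuracy on the GAN samples serves as the predicted test accuracy.
    return correct / n_samples
```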
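
Similarly, for the Experiment Setup row, a hedged sketch of the quoted classifier-training configuration (SGD with momentum 0.9, batch size 128, horizontal-flip augmentation) on CIFAR-10. The model architecture, learning rate, and epoch count are placeholders; the quoted setup does not specify them.

```python
import torch
from torch import nn, optim
from torchvision import datasets, transforms

# Quoted setup: SGD with momentum 0.9, batch size 128, horizontal flips.
transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128,
                                           shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder classifier
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)  # lr is a placeholder
criterion = nn.CrossEntropyLoss()

for epoch in range(10):  # placeholder epoch count
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```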