A Classification-Based Study of Covariate Shift in GAN Distributions

Authors: Shibani Santurkar, Ludwig Schmidt, Aleksander Madry

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We now describe results from our experimental studies of covariate shift in GANs based on the procedures outlined in Section 3, using the setup from Section 4.
Researcher Affiliation | Academia | Massachusetts Institute of Technology. Correspondence to: Shibani Santurkar <shibani@mit.edu>, Ludwig Schmidt <ludwigs@mit.edu>, Aleksander Madry <madry@mit.edu>.
Pseudocode | No | The paper describes experimental procedures as numbered steps within paragraphs but does not provide formal pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the methodology described.
Open Datasets | Yes | We chose five popular GANs and studied them on the CelebA and LSUN datasets, arguably the two most well-known datasets in the context of GANs. Conveniently, these datasets also have rich annotations, making them particularly suited for our classification-based evaluations. (Section 4.1) ... CelebA (Liu et al., 2015) and LSUN (Yu et al., 2015) datasets...
Dataset Splits | No | The paper mentions 'train' and 'test' data, but does not explicitly specify validation dataset splits or a methodology for validation in the main text.
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as CPU or GPU models.
Software Dependencies | No | The paper mentions using standard implementations and specific models (e.g., '32-Layer ResNet', 'Linear Model') but does not specify software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | In the following sections we describe the setup and results for our classification-based GAN diversity studies. (Section 4) ... The paper details the steps for measuring 'Mode Collapse' (Section 3.1) and 'Boundary Distortion' (Section 3.2), including how synthetic datasets are generated and how classifiers are trained. It also states: 'The same architecture and hyperparameter settings were used for all datasets (true and GAN-derived) in any given comparison of classification performance.' (Section 4.2) A minimal sketch of this train-on-true vs. train-on-GAN comparison follows the table.
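
To make the quoted setup concrete, below is a minimal sketch (not the authors' code, which the report notes was not released) of the kind of comparison described in the Experiment Setup row: two classifiers with identical architecture and hyperparameters are trained, one on true data and one on GAN-derived data, and both are scored on the same held-out real test set. The linear model is a simple stand-in for the paper's classifiers, and the function name `boundary_distortion_gap` and the toy arrays are illustrative assumptions rather than details from the paper.

```python
# Minimal sketch of a classification-based comparison between true and
# GAN-derived training data, assuming identical model settings for both
# (as quoted from Section 4.2). All names and data here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression


def boundary_distortion_gap(real_train, real_labels,
                            gan_train, gan_labels,
                            real_test, test_labels,
                            seed=0):
    """Return (accuracy of real-trained, accuracy of GAN-trained) on the same real test set."""
    # Same architecture and hyperparameters for both training sets.
    clf_real = LogisticRegression(max_iter=1000, random_state=seed)
    clf_gan = LogisticRegression(max_iter=1000, random_state=seed)

    clf_real.fit(real_train, real_labels)   # baseline: trained on true data
    clf_gan.fit(gan_train, gan_labels)      # trained on GAN-generated data

    acc_real = clf_real.score(real_test, test_labels)
    acc_gan = clf_gan.score(real_test, test_labels)
    return acc_real, acc_gan


if __name__ == "__main__":
    # Toy stand-in data; in the paper's setting these would be CelebA/LSUN
    # examples with attribute labels and samples drawn from a trained GAN.
    rng = np.random.default_rng(0)
    X_real = rng.normal(size=(1000, 64)); y_real = (X_real[:, 0] > 0).astype(int)
    X_gan = rng.normal(size=(1000, 64));  y_gan = (X_gan[:, 0] > 0).astype(int)
    X_test = rng.normal(size=(500, 64));  y_test = (X_test[:, 0] > 0).astype(int)

    acc_real, acc_gan = boundary_distortion_gap(X_real, y_real,
                                                X_gan, y_gan,
                                                X_test, y_test)
    print(f"real-trained: {acc_real:.3f}  GAN-trained: {acc_gan:.3f}")
```

In a real reproduction, the GAN-trained classifier scoring noticeably below the real-trained baseline on the same real test set is the kind of gap the paper's classification-based procedure is designed to expose; the toy data above will show no such gap by construction.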