Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
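To make the validation step concrete: a minimal, hypothetical sketch of how automated (LLM) labels can be scored against a manually labeled gold set, assuming simple per-variable string labels. The function name, label values, and data below are illustrative only and do not reflect the actual pipeline in [1].

```python
# Hypothetical sketch: agreement between LLM-assigned and manual labels
# for one reproducibility variable. All names and data are illustrative.
from collections import Counter

def agreement_metrics(llm_labels, manual_labels):
    """Return overall accuracy and counts of each (llm, manual) label pair."""
    assert len(llm_labels) == len(manual_labels), "label lists must align"
    pairs = list(zip(llm_labels, manual_labels))
    correct = sum(1 for llm, manual in pairs if llm == manual)
    return correct / len(pairs), Counter(pairs)

# Example: the LLM disagrees with the manual label on one of four papers.
llm = ["Yes", "No", "No", "Yes"]
manual = ["Yes", "No", "Yes", "Yes"]
accuracy, confusion = agreement_metrics(llm, manual)
# accuracy == 0.75; confusion records one ("No", "Yes") disagreement
```

The pair counts double as a small confusion table, which is what an accuracy report like the one referenced in [1] would typically summarize per variable.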

Towards Principled Methods for Training Generative Adversarial Networks

Authors: Martin Arjovsky, Léon Bottou

ICLR 2017 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The goal of this paper is not to introduce a single algorithm or method, but to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks. In order to substantiate our theoretical analysis, we perform targeted experiments to verify our assumptions, illustrate our claims, and quantify the phenomena.
Researcher Affiliation | Collaboration | Martin Arjovsky (Courant Institute of Mathematical Sciences) and Léon Bottou (Facebook AI Research).
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. The content primarily consists of mathematical proofs and theoretical analyses.
Open Source Code | No | The paper does not include an explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper mentions training a 'DCGAN' but does not specify the dataset used, nor does it provide concrete access information (link, or citation with authors/year) for any publicly available or open dataset.
Dataset Splits | No | The paper does not provide specific details about train/validation/test dataset splits. It mentions training for a certain number of epochs, but gives no percentages or sample counts for splits.
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments, such as specific GPU or CPU models.
Software Dependencies | No | The paper does not provide a reproducible description of ancillary software: it does not include specific version numbers for key software components or libraries.
Experiment Setup | No | The paper states that a 'DCGAN' was trained for '1, 10 and 25 epochs' and that a discriminator was trained 'from scratch', but it does not provide specific experimental setup details such as hyperparameters (e.g., learning rate, batch size) or optimizer settings.