Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
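The validation against a manually labeled dataset amounts to a per-variable agreement check between LLM labels and human gold labels. Below is a minimal sketch of such a check; the function name and the `(paper_id, variable) -> label` data layout are invented for illustration and are not the pipeline's actual interface.

```python
from collections import Counter

def accuracy_by_variable(llm_labels, manual_labels):
    """Compare LLM-assigned labels to manual gold labels, per variable.

    Both arguments map (paper_id, variable) -> label string.
    Returns variable -> fraction of papers where the LLM label
    matches the manual label.
    """
    correct, total = Counter(), Counter()
    for key, gold in manual_labels.items():
        _, variable = key
        total[variable] += 1
        if llm_labels.get(key) == gold:
            correct[variable] += 1
    return {v: correct[v] / total[v] for v in total}

# Toy usage with two papers and one variable.
manual = {("paper-1", "Open Source Code"): "No",
          ("paper-2", "Open Source Code"): "Yes"}
llm = {("paper-1", "Open Source Code"): "No",
       ("paper-2", "Open Source Code"): "No"}
print(accuracy_by_variable(llm, manual))  # {'Open Source Code': 0.5}
```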
Semantic Interpolation in Implicit Models
Authors: Yannic Kilcher, Aurélien Lucchi, Thomas Hofmann
ICLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on standard benchmark image datasets demonstrate clear visual improvements in the quality of the generated samples and exhibit more meaningful interpolation paths. The setup used for the experiments presented below closely follows popular setups in GAN research and is detailed in the Appendix. |
| Researcher Affiliation | Academia | Yannic Kilcher, Aurélien Lucchi, Thomas Hofmann, Department of Computer Science, ETH Zurich |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating the availability of open-source code for the described methodology. |
| Open Datasets | Yes | Experiments on standard benchmark image datasets demonstrate clear visual improvements in the quality of the generated samples and exhibit more meaningful interpolation paths. |
| Dataset Splits | No | The paper does not explicitly provide specific details on training, validation, and test dataset splits, percentages, or sample counts. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like 'DCGAN architecture', 'ReLU nonlinearities', 'batch normalization', and 'RMSProp' but does not specify their version numbers. |
| Experiment Setup | Yes | The latent space for all models is of dimension 100 and the scale parameters for both the normal and gamma distributions are set to 1.0. The networks are trained using RMSProp with a learning rate of 0.0003 and mini-batches of size 100. |
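
The reported setup pins down enough hyperparameters (latent dimension 100, normal and gamma priors with scale 1.0, RMSProp with learning rate 0.0003, mini-batches of 100) to sketch a training configuration. The PyTorch sketch below assumes a placeholder generator body and a gamma concentration of 1.0; the actual DCGAN-style architectures are in the paper's appendix and are not reproduced here.

```python
import torch
from torch import nn, optim

# Hyperparameters as reported in the paper's experiment setup.
LATENT_DIM = 100   # dimension of the latent space
LR = 3e-4          # RMSProp learning rate
BATCH_SIZE = 100   # mini-batch size
SCALE = 1.0        # scale for both the normal and gamma priors

def sample_latent(batch_size, dist="normal"):
    """Draw a batch of latent codes from one of the two priors
    mentioned in the setup (normal or gamma, both with scale 1.0).
    The gamma concentration of 1.0 is an assumption; the excerpt
    above does not state it."""
    if dist == "normal":
        return torch.randn(batch_size, LATENT_DIM) * SCALE
    gamma = torch.distributions.Gamma(concentration=1.0, rate=1.0 / SCALE)
    return gamma.sample((batch_size, LATENT_DIM))

# Placeholder network standing in for the DCGAN-style generator.
generator = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                          nn.Linear(128, 28 * 28), nn.Tanh())
g_opt = optim.RMSprop(generator.parameters(), lr=LR)

z = sample_latent(BATCH_SIZE)   # (100, 100) batch of latent codes
fake = generator(z)             # generated samples
```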